00:00:00.001 Started by upstream project "autotest-per-patch" build number 126252
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.087 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.088 The recommended git tool is: git
00:00:00.088 using credential 00000000-0000-0000-0000-000000000002
00:00:00.089 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.132 Fetching changes from the remote Git repository
00:00:00.134 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.172 Using shallow fetch with depth 1
00:00:00.172 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.172 > git --version # timeout=10
00:00:00.209 > git --version # 'git version 2.39.2'
00:00:00.209 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.234 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.234 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.239 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.302 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.314 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:04.314 > git config core.sparsecheckout # timeout=10
00:00:04.327 > git read-tree -mu HEAD # timeout=10
00:00:04.344 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:04.363 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:04.364 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:04.491 [Pipeline] Start of Pipeline
00:00:04.504 [Pipeline] library
00:00:04.505 Loading library shm_lib@master
00:00:04.505 Library shm_lib@master is cached. Copying from home.
00:00:04.521 [Pipeline] node
00:00:04.528 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.531 [Pipeline] {
00:00:04.539 [Pipeline] catchError
00:00:04.541 [Pipeline] {
00:00:04.553 [Pipeline] wrap
00:00:04.561 [Pipeline] {
00:00:04.568 [Pipeline] stage
00:00:04.571 [Pipeline] { (Prologue)
00:00:04.761 [Pipeline] sh
00:00:05.040 + logger -p user.info -t JENKINS-CI
00:00:05.059 [Pipeline] echo
00:00:05.061 Node: WFP8
00:00:05.068 [Pipeline] sh
00:00:05.362 [Pipeline] setCustomBuildProperty
00:00:05.372 [Pipeline] echo
00:00:05.373 Cleanup processes
00:00:05.377 [Pipeline] sh
00:00:05.656 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.657 717017 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.667 [Pipeline] sh
00:00:05.946 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.946 ++ grep -v 'sudo pgrep'
00:00:05.946 ++ awk '{print $1}'
00:00:05.946 + sudo kill -9
00:00:05.946 + true
00:00:05.959 [Pipeline] cleanWs
00:00:05.968 [WS-CLEANUP] Deleting project workspace...
00:00:05.968 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.973 [WS-CLEANUP] done
00:00:05.977 [Pipeline] setCustomBuildProperty
00:00:05.992 [Pipeline] sh
00:00:06.269 + sudo git config --global --replace-all safe.directory '*'
00:00:06.357 [Pipeline] httpRequest
00:00:06.381 [Pipeline] echo
00:00:06.383 Sorcerer 10.211.164.101 is alive
00:00:06.391 [Pipeline] httpRequest
00:00:06.396 HttpMethod: GET
00:00:06.396 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.397 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:06.411 Response Code: HTTP/1.1 200 OK
00:00:06.411 Success: Status code 200 is in the accepted range: 200,404
00:00:06.411 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:08.308 [Pipeline] sh
00:00:08.594 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:08.613 [Pipeline] httpRequest
00:00:08.647 [Pipeline] echo
00:00:08.649 Sorcerer 10.211.164.101 is alive
00:00:08.659 [Pipeline] httpRequest
00:00:08.664 HttpMethod: GET
00:00:08.665 URL: http://10.211.164.101/packages/spdk_00bf4c5711d9237bcd47348a985d87b3989f6939.tar.gz
00:00:08.665 Sending request to url: http://10.211.164.101/packages/spdk_00bf4c5711d9237bcd47348a985d87b3989f6939.tar.gz
00:00:08.678 Response Code: HTTP/1.1 200 OK
00:00:08.678 Success: Status code 200 is in the accepted range: 200,404
00:00:08.679 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_00bf4c5711d9237bcd47348a985d87b3989f6939.tar.gz
00:01:23.433 [Pipeline] sh
00:01:23.713 + tar --no-same-owner -xf spdk_00bf4c5711d9237bcd47348a985d87b3989f6939.tar.gz
00:01:26.258 [Pipeline] sh
00:01:26.541 + git -C spdk log --oneline -n5
00:01:26.541 00bf4c571 scripts/perf: Include per-node hugepages stats in collect-vmstat
00:01:26.541 958a93494 scripts/setup.sh: Use HUGE_EVEN_ALLOC logic by default
00:01:26.541 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent.
00:01:26.541 248c547d0 nvmf/tcp: add option for selecting a sock impl
00:01:26.541 2d30d9f83 accel: introduce tasks in sequence limit
00:01:26.553 [Pipeline] }
00:01:26.571 [Pipeline] // stage
00:01:26.581 [Pipeline] stage
00:01:26.584 [Pipeline] { (Prepare)
00:01:26.602 [Pipeline] writeFile
00:01:26.619 [Pipeline] sh
00:01:26.900 + logger -p user.info -t JENKINS-CI
00:01:26.913 [Pipeline] sh
00:01:27.196 + logger -p user.info -t JENKINS-CI
00:01:27.209 [Pipeline] sh
00:01:27.495 + cat autorun-spdk.conf
00:01:27.495 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.495 SPDK_TEST_NVMF=1
00:01:27.495 SPDK_TEST_NVME_CLI=1
00:01:27.495 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.495 SPDK_TEST_NVMF_NICS=e810
00:01:27.495 SPDK_TEST_VFIOUSER=1
00:01:27.495 SPDK_RUN_UBSAN=1
00:01:27.495 NET_TYPE=phy
00:01:27.502 RUN_NIGHTLY=0
00:01:27.511 [Pipeline] readFile
00:01:27.539 [Pipeline] withEnv
00:01:27.542 [Pipeline] {
00:01:27.557 [Pipeline] sh
00:01:27.841 + set -ex
00:01:27.841 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:27.841 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:27.841 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:27.841 ++ SPDK_TEST_NVMF=1
00:01:27.841 ++ SPDK_TEST_NVME_CLI=1
00:01:27.841 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:27.841 ++ SPDK_TEST_NVMF_NICS=e810
00:01:27.841 ++ SPDK_TEST_VFIOUSER=1
00:01:27.841 ++ SPDK_RUN_UBSAN=1
00:01:27.841 ++ NET_TYPE=phy
00:01:27.841 ++ RUN_NIGHTLY=0
00:01:27.841 + case $SPDK_TEST_NVMF_NICS in
00:01:27.841 + DRIVERS=ice
00:01:27.841 + [[ tcp == \r\d\m\a ]]
00:01:27.841 + [[ -n ice ]]
00:01:27.841 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:27.841 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:27.841 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:27.841 rmmod: ERROR: Module irdma is not currently loaded
00:01:27.841 rmmod: ERROR: Module i40iw is not currently loaded
00:01:27.841 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:27.841 + true
00:01:27.841 + for D in $DRIVERS
00:01:27.841 + sudo modprobe ice
00:01:27.841 + exit 0
00:01:27.852 [Pipeline] }
00:01:27.871 [Pipeline] // withEnv
00:01:27.877 [Pipeline] }
00:01:27.894 [Pipeline] // stage
00:01:27.905 [Pipeline] catchError
00:01:27.907 [Pipeline] {
00:01:27.924 [Pipeline] timeout
00:01:27.924 Timeout set to expire in 50 min
00:01:27.926 [Pipeline] {
00:01:27.942 [Pipeline] stage
00:01:27.944 [Pipeline] { (Tests)
00:01:27.960 [Pipeline] sh
00:01:28.242 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.242 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.242 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.242 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:28.242 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:28.242 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:28.242 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:28.242 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:28.243 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:28.243 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:28.243 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:28.243 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:28.243 + source /etc/os-release
00:01:28.243 ++ NAME='Fedora Linux'
00:01:28.243 ++ VERSION='38 (Cloud Edition)'
00:01:28.243 ++ ID=fedora
00:01:28.243 ++ VERSION_ID=38
00:01:28.243 ++ VERSION_CODENAME=
00:01:28.243 ++ PLATFORM_ID=platform:f38
00:01:28.243 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:28.243 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:28.243 ++ LOGO=fedora-logo-icon
00:01:28.243 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:28.243 ++ HOME_URL=https://fedoraproject.org/
00:01:28.243 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:28.243 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:28.243 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:28.243 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:28.243 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:28.243 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:28.243 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:28.243 ++ SUPPORT_END=2024-05-14
00:01:28.243 ++ VARIANT='Cloud Edition'
00:01:28.243 ++ VARIANT_ID=cloud
00:01:28.243 + uname -a
00:01:28.243 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:28.243 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:30.147 Hugepages
00:01:30.147 node hugesize free / total
00:01:30.147 node0 1048576kB 0 / 0
00:01:30.147 node0 2048kB 0 / 0
00:01:30.147 node1 1048576kB 0 / 0
00:01:30.147 node1 2048kB 0 / 0
00:01:30.147
00:01:30.147 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:30.147 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:30.147 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:30.147 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:30.147 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:30.147 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:30.147 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:30.147 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:30.147 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:30.147 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:30.147 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:30.147 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:30.147 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:30.148 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:30.148 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:30.148 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:30.148 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:30.148 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:30.148 + rm -f /tmp/spdk-ld-path
00:01:30.148 + source autorun-spdk.conf
00:01:30.148 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.148 ++ SPDK_TEST_NVMF=1
00:01:30.148 ++ SPDK_TEST_NVME_CLI=1
00:01:30.148 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:30.148 ++ SPDK_TEST_NVMF_NICS=e810
00:01:30.148 ++ SPDK_TEST_VFIOUSER=1
00:01:30.148 ++ SPDK_RUN_UBSAN=1
00:01:30.148 ++ NET_TYPE=phy
00:01:30.148 ++ RUN_NIGHTLY=0
00:01:30.148 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:30.148 + [[ -n '' ]]
00:01:30.148 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:30.148 + for M in /var/spdk/build-*-manifest.txt
00:01:30.148 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:30.148 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:30.148 + for M in /var/spdk/build-*-manifest.txt
00:01:30.148 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:30.148 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:30.148 ++ uname
00:01:30.148 + [[ Linux == \L\i\n\u\x ]]
00:01:30.148 + sudo dmesg -T
00:01:30.148 + sudo dmesg --clear
00:01:30.148 + dmesg_pid=718454
00:01:30.148 + [[ Fedora Linux == FreeBSD ]]
00:01:30.148 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:30.148 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:30.148 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:30.148 + [[ -x /usr/src/fio-static/fio ]]
00:01:30.148 + export FIO_BIN=/usr/src/fio-static/fio
00:01:30.148 + FIO_BIN=/usr/src/fio-static/fio
00:01:30.148 + sudo dmesg -Tw
00:01:30.148 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:30.148 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:30.148 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:30.148 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:30.148 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:30.148 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:30.148 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:30.148 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:30.148 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:30.148 Test configuration:
00:01:30.148 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.148 SPDK_TEST_NVMF=1
00:01:30.148 SPDK_TEST_NVME_CLI=1
00:01:30.148 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:30.148 SPDK_TEST_NVMF_NICS=e810
00:01:30.148 SPDK_TEST_VFIOUSER=1
00:01:30.148 SPDK_RUN_UBSAN=1
00:01:30.148 NET_TYPE=phy
00:01:30.408 RUN_NIGHTLY=0
23:27:19 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:30.408 23:27:19 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:30.408 23:27:19 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:30.408 23:27:19 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:30.408 23:27:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.408 23:27:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.408 23:27:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.408 23:27:19 -- paths/export.sh@5 -- $ export PATH
00:01:30.408 23:27:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:30.408 23:27:19 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:30.408 23:27:19 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:30.408 23:27:19 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721078839.XXXXXX
00:01:30.408 23:27:19 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721078839.RZ6vzK
00:01:30.408 23:27:19 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:01:30.408 23:27:19 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:01:30.408 23:27:19 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:30.408 23:27:19 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:30.408 23:27:19 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:30.408 23:27:19 -- common/autobuild_common.sh@460 -- $ get_config_params
00:01:30.408 23:27:19 -- common/autotest_common.sh@390 -- $ xtrace_disable
00:01:30.408 23:27:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:30.408 23:27:19 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:30.408 23:27:19 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:01:30.408 23:27:19 -- pm/common@17 -- $ local monitor
00:01:30.408 23:27:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.408 23:27:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.408 23:27:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.408 23:27:19 -- pm/common@21 -- $ date +%s
00:01:30.408 23:27:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:30.408 23:27:19 -- pm/common@21 -- $ date +%s
00:01:30.408 23:27:19 -- pm/common@25 -- $ sleep 1
00:01:30.408 23:27:19 -- pm/common@21 -- $ date +%s
00:01:30.408 23:27:19 -- pm/common@21 -- $ date +%s
00:01:30.408 23:27:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078839
00:01:30.408 23:27:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078839
00:01:30.408 23:27:19 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078839
00:01:30.408 23:27:19 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078839
00:01:30.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078839_collect-vmstat.pm.log
00:01:30.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078839_collect-cpu-load.pm.log
00:01:30.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078839_collect-cpu-temp.pm.log
00:01:30.408 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078839_collect-bmc-pm.bmc.pm.log
00:01:31.347 23:27:20 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:31.347 23:27:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:31.347 23:27:20 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:31.347 23:27:20 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:31.347 23:27:20 -- spdk/autobuild.sh@16 -- $ date -u
00:01:31.347 Mon Jul 15 09:27:20 PM UTC 2024
00:01:31.347 23:27:20 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:31.347 v24.09-pre-211-g00bf4c571
00:01:31.347 23:27:20 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:31.347 23:27:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:31.347 23:27:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:31.347 23:27:20 -- common/autotest_common.sh@1093 -- $ '[' 3 -le 1 ']'
00:01:31.347 23:27:20 -- common/autotest_common.sh@1099 -- $ xtrace_disable
00:01:31.347 23:27:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:31.347 ************************************
00:01:31.347 START TEST ubsan
00:01:31.347 ************************************
00:01:31.347 23:27:20 ubsan -- common/autotest_common.sh@1117 -- $ echo 'using ubsan'
00:01:31.347 using ubsan
00:01:31.347
00:01:31.347 real 0m0.000s
00:01:31.347 user 0m0.000s
00:01:31.347 sys 0m0.000s
00:01:31.347 23:27:20 ubsan -- common/autotest_common.sh@1118 -- $ xtrace_disable
00:01:31.347 23:27:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:31.347 ************************************
00:01:31.347 END TEST ubsan
00:01:31.347 ************************************
00:01:31.347 23:27:20 -- common/autotest_common.sh@1136 -- $ return 0
00:01:31.347 23:27:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:31.347 23:27:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:31.347 23:27:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:31.347 23:27:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:31.347 23:27:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:31.347 23:27:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:31.347 23:27:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:31.347 23:27:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:31.347 23:27:20 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:31.607 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:31.607 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:31.865 Using 'verbs' RDMA provider
00:01:45.018 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:54.992 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:54.992 Creating mk/config.mk...done.
00:01:54.992 Creating mk/cc.flags.mk...done.
00:01:54.992 Type 'make' to build.
00:01:54.992 23:27:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:54.992 23:27:43 -- common/autotest_common.sh@1093 -- $ '[' 3 -le 1 ']'
00:01:54.992 23:27:43 -- common/autotest_common.sh@1099 -- $ xtrace_disable
00:01:54.992 23:27:43 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.992 ************************************
00:01:54.992 START TEST make
00:01:54.992 ************************************
00:01:54.992 23:27:43 make -- common/autotest_common.sh@1117 -- $ make -j96
00:01:54.992 make[1]: Nothing to be done for 'all'.
00:01:55.934 The Meson build system
00:01:55.934 Version: 1.3.1
00:01:55.934 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:55.934 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:55.934 Build type: native build
00:01:55.934 Project name: libvfio-user
00:01:55.934 Project version: 0.0.1
00:01:55.934 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:55.934 C linker for the host machine: cc ld.bfd 2.39-16
00:01:55.934 Host machine cpu family: x86_64
00:01:55.934 Host machine cpu: x86_64
00:01:55.934 Run-time dependency threads found: YES
00:01:55.934 Library dl found: YES
00:01:55.934 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:55.934 Run-time dependency json-c found: YES 0.17
00:01:55.934 Run-time dependency cmocka found: YES 1.1.7
00:01:55.934 Program pytest-3 found: NO
00:01:55.934 Program flake8 found: NO
00:01:55.934 Program misspell-fixer found: NO
00:01:55.934 Program restructuredtext-lint found: NO
00:01:55.934 Program valgrind found: YES (/usr/bin/valgrind)
00:01:55.934 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:55.934 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:55.934 Compiler for C supports arguments -Wwrite-strings: YES
00:01:55.934 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:55.934 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:55.934 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:55.934 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:55.934 Build targets in project: 8
00:01:55.934 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:55.934 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:55.934
00:01:55.934 libvfio-user 0.0.1
00:01:55.934
00:01:55.934 User defined options
00:01:55.934 buildtype : debug
00:01:55.934 default_library: shared
00:01:55.934 libdir : /usr/local/lib
00:01:55.934
00:01:55.934 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:56.501 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:56.501 [1/37] Compiling C object samples/null.p/null.c.o
00:01:56.501 [2/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:56.501 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:56.501 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:56.501 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:56.501 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:56.501 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:56.501 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:56.501 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:56.501 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:56.501 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:56.501 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:56.501 [13/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:56.501 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:56.501 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:56.501 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:56.501 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:56.501 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:56.501 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:56.501 [20/37] Compiling C object samples/server.p/server.c.o
00:01:56.501 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:56.501 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:56.501 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:56.501 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:56.759 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:56.759 [26/37] Compiling C object samples/client.p/client.c.o
00:01:56.759 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:56.759 [28/37] Linking target samples/client
00:01:56.759 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:56.759 [30/37] Linking target test/unit_tests
00:01:56.759 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:57.018 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:57.018 [33/37] Linking target samples/lspci
00:01:57.018 [34/37] Linking target samples/null
00:01:57.018 [35/37] Linking target samples/server
00:01:57.018 [36/37] Linking target samples/gpio-pci-idio-16
00:01:57.018 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:57.018 INFO: autodetecting backend as ninja
00:01:57.018 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:57.018 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:57.277 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:57.277 ninja: no work to do.
00:02:02.541 The Meson build system
00:02:02.541 Version: 1.3.1
00:02:02.541 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:02.541 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:02.541 Build type: native build
00:02:02.541 Program cat found: YES (/usr/bin/cat)
00:02:02.541 Project name: DPDK
00:02:02.541 Project version: 24.03.0
00:02:02.541 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:02.541 C linker for the host machine: cc ld.bfd 2.39-16
00:02:02.541 Host machine cpu family: x86_64
00:02:02.541 Host machine cpu: x86_64
00:02:02.541 Message: ## Building in Developer Mode ##
00:02:02.541 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:02.541 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:02.541 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:02.541 Program python3 found: YES (/usr/bin/python3)
00:02:02.541 Program cat found: YES (/usr/bin/cat)
00:02:02.541 Compiler for C supports arguments -march=native: YES
00:02:02.541 Checking for size of "void *" : 8
00:02:02.541 Checking for size of "void *" : 8 (cached)
00:02:02.541 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:02.541 Library m found: YES
00:02:02.541 Library numa found: YES
00:02:02.541 Has header "numaif.h" : YES
00:02:02.541 Library fdt found: NO
00:02:02.541 Library execinfo found: NO
00:02:02.541 Has header "execinfo.h" : YES
00:02:02.541 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:02.541 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:02.541 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:02.541 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:02.541 Run-time dependency openssl found: YES 3.0.9
00:02:02.541 Run-time dependency libpcap found: YES 1.10.4
00:02:02.541 Has header "pcap.h" with dependency libpcap: YES
00:02:02.541 Compiler for C supports arguments -Wcast-qual: YES
00:02:02.541 Compiler for C supports arguments -Wdeprecated: YES
00:02:02.541 Compiler for C supports arguments -Wformat: YES
00:02:02.541 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:02.541 Compiler for C supports arguments -Wformat-security: NO
00:02:02.541 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:02.541 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:02.541 Compiler for C supports arguments -Wnested-externs: YES
00:02:02.541 Compiler for C supports arguments -Wold-style-definition: YES
00:02:02.541 Compiler for C supports arguments -Wpointer-arith: YES
00:02:02.541 Compiler for C supports arguments -Wsign-compare: YES
00:02:02.541 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:02.541 Compiler for C supports arguments -Wundef: YES
00:02:02.541 Compiler for C supports arguments -Wwrite-strings: YES
00:02:02.541 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:02.541 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:02.541 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:02.541 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:02.541 Program objdump found: YES (/usr/bin/objdump)
00:02:02.541 Compiler for C supports arguments -mavx512f: YES
00:02:02.541 Checking if "AVX512 checking" compiles: YES
00:02:02.541 Fetching value of define "__SSE4_2__" : 1
00:02:02.541 Fetching value of define "__AES__" : 1
00:02:02.541 Fetching value of define "__AVX__" : 1
00:02:02.542 Fetching value of define "__AVX2__" : 1
00:02:02.542 Fetching value of define "__AVX512BW__" : 1
00:02:02.542 Fetching value of define "__AVX512CD__" : 1
00:02:02.542 Fetching value of define "__AVX512DQ__" : 1
00:02:02.542 Fetching value of define "__AVX512F__" : 1
00:02:02.542 Fetching value of define "__AVX512VL__" : 1 00:02:02.542 Fetching value of define "__PCLMUL__" : 1 00:02:02.542 Fetching value of define "__RDRND__" : 1 00:02:02.542 Fetching value of define "__RDSEED__" : 1 00:02:02.542 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:02.542 Fetching value of define "__znver1__" : (undefined) 00:02:02.542 Fetching value of define "__znver2__" : (undefined) 00:02:02.542 Fetching value of define "__znver3__" : (undefined) 00:02:02.542 Fetching value of define "__znver4__" : (undefined) 00:02:02.542 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:02.542 Message: lib/log: Defining dependency "log" 00:02:02.542 Message: lib/kvargs: Defining dependency "kvargs" 00:02:02.542 Message: lib/telemetry: Defining dependency "telemetry" 00:02:02.542 Checking for function "getentropy" : NO 00:02:02.542 Message: lib/eal: Defining dependency "eal" 00:02:02.542 Message: lib/ring: Defining dependency "ring" 00:02:02.542 Message: lib/rcu: Defining dependency "rcu" 00:02:02.542 Message: lib/mempool: Defining dependency "mempool" 00:02:02.542 Message: lib/mbuf: Defining dependency "mbuf" 00:02:02.542 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:02.542 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:02.542 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:02.542 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:02.542 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:02.542 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:02.542 Compiler for C supports arguments -mpclmul: YES 00:02:02.542 Compiler for C supports arguments -maes: YES 00:02:02.542 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:02.542 Compiler for C supports arguments -mavx512bw: YES 00:02:02.542 Compiler for C supports arguments -mavx512dq: YES 00:02:02.542 Compiler for C supports arguments -mavx512vl: YES 00:02:02.542 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:02:02.542 Compiler for C supports arguments -mavx2: YES 00:02:02.542 Compiler for C supports arguments -mavx: YES 00:02:02.542 Message: lib/net: Defining dependency "net" 00:02:02.542 Message: lib/meter: Defining dependency "meter" 00:02:02.542 Message: lib/ethdev: Defining dependency "ethdev" 00:02:02.542 Message: lib/pci: Defining dependency "pci" 00:02:02.542 Message: lib/cmdline: Defining dependency "cmdline" 00:02:02.542 Message: lib/hash: Defining dependency "hash" 00:02:02.542 Message: lib/timer: Defining dependency "timer" 00:02:02.542 Message: lib/compressdev: Defining dependency "compressdev" 00:02:02.542 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:02.542 Message: lib/dmadev: Defining dependency "dmadev" 00:02:02.542 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:02.542 Message: lib/power: Defining dependency "power" 00:02:02.542 Message: lib/reorder: Defining dependency "reorder" 00:02:02.542 Message: lib/security: Defining dependency "security" 00:02:02.542 Has header "linux/userfaultfd.h" : YES 00:02:02.542 Has header "linux/vduse.h" : YES 00:02:02.542 Message: lib/vhost: Defining dependency "vhost" 00:02:02.542 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:02.542 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:02.542 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:02.542 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:02.542 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:02.542 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:02.542 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:02.542 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:02.542 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:02.542 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:02:02.542 Program doxygen found: YES (/usr/bin/doxygen) 00:02:02.542 Configuring doxy-api-html.conf using configuration 00:02:02.542 Configuring doxy-api-man.conf using configuration 00:02:02.542 Program mandb found: YES (/usr/bin/mandb) 00:02:02.542 Program sphinx-build found: NO 00:02:02.542 Configuring rte_build_config.h using configuration 00:02:02.542 Message: 00:02:02.542 ================= 00:02:02.542 Applications Enabled 00:02:02.542 ================= 00:02:02.542 00:02:02.542 apps: 00:02:02.542 00:02:02.542 00:02:02.542 Message: 00:02:02.542 ================= 00:02:02.542 Libraries Enabled 00:02:02.542 ================= 00:02:02.542 00:02:02.542 libs: 00:02:02.542 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:02.542 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:02.542 cryptodev, dmadev, power, reorder, security, vhost, 00:02:02.542 00:02:02.542 Message: 00:02:02.542 =============== 00:02:02.542 Drivers Enabled 00:02:02.542 =============== 00:02:02.542 00:02:02.542 common: 00:02:02.542 00:02:02.542 bus: 00:02:02.542 pci, vdev, 00:02:02.542 mempool: 00:02:02.542 ring, 00:02:02.542 dma: 00:02:02.542 00:02:02.542 net: 00:02:02.542 00:02:02.542 crypto: 00:02:02.542 00:02:02.542 compress: 00:02:02.542 00:02:02.542 vdpa: 00:02:02.542 00:02:02.542 00:02:02.542 Message: 00:02:02.542 ================= 00:02:02.542 Content Skipped 00:02:02.542 ================= 00:02:02.542 00:02:02.542 apps: 00:02:02.542 dumpcap: explicitly disabled via build config 00:02:02.542 graph: explicitly disabled via build config 00:02:02.542 pdump: explicitly disabled via build config 00:02:02.542 proc-info: explicitly disabled via build config 00:02:02.542 test-acl: explicitly disabled via build config 00:02:02.542 test-bbdev: explicitly disabled via build config 00:02:02.542 test-cmdline: explicitly disabled via build config 00:02:02.542 test-compress-perf: explicitly disabled via build config 00:02:02.542 test-crypto-perf: explicitly disabled via 
build config 00:02:02.542 test-dma-perf: explicitly disabled via build config 00:02:02.542 test-eventdev: explicitly disabled via build config 00:02:02.542 test-fib: explicitly disabled via build config 00:02:02.542 test-flow-perf: explicitly disabled via build config 00:02:02.542 test-gpudev: explicitly disabled via build config 00:02:02.542 test-mldev: explicitly disabled via build config 00:02:02.542 test-pipeline: explicitly disabled via build config 00:02:02.542 test-pmd: explicitly disabled via build config 00:02:02.542 test-regex: explicitly disabled via build config 00:02:02.542 test-sad: explicitly disabled via build config 00:02:02.542 test-security-perf: explicitly disabled via build config 00:02:02.542 00:02:02.542 libs: 00:02:02.542 argparse: explicitly disabled via build config 00:02:02.542 metrics: explicitly disabled via build config 00:02:02.542 acl: explicitly disabled via build config 00:02:02.542 bbdev: explicitly disabled via build config 00:02:02.542 bitratestats: explicitly disabled via build config 00:02:02.542 bpf: explicitly disabled via build config 00:02:02.542 cfgfile: explicitly disabled via build config 00:02:02.542 distributor: explicitly disabled via build config 00:02:02.542 efd: explicitly disabled via build config 00:02:02.542 eventdev: explicitly disabled via build config 00:02:02.542 dispatcher: explicitly disabled via build config 00:02:02.542 gpudev: explicitly disabled via build config 00:02:02.542 gro: explicitly disabled via build config 00:02:02.542 gso: explicitly disabled via build config 00:02:02.542 ip_frag: explicitly disabled via build config 00:02:02.542 jobstats: explicitly disabled via build config 00:02:02.542 latencystats: explicitly disabled via build config 00:02:02.542 lpm: explicitly disabled via build config 00:02:02.542 member: explicitly disabled via build config 00:02:02.542 pcapng: explicitly disabled via build config 00:02:02.542 rawdev: explicitly disabled via build config 00:02:02.542 regexdev: 
explicitly disabled via build config 00:02:02.542 mldev: explicitly disabled via build config 00:02:02.542 rib: explicitly disabled via build config 00:02:02.542 sched: explicitly disabled via build config 00:02:02.542 stack: explicitly disabled via build config 00:02:02.542 ipsec: explicitly disabled via build config 00:02:02.542 pdcp: explicitly disabled via build config 00:02:02.542 fib: explicitly disabled via build config 00:02:02.542 port: explicitly disabled via build config 00:02:02.542 pdump: explicitly disabled via build config 00:02:02.542 table: explicitly disabled via build config 00:02:02.542 pipeline: explicitly disabled via build config 00:02:02.542 graph: explicitly disabled via build config 00:02:02.542 node: explicitly disabled via build config 00:02:02.542 00:02:02.542 drivers: 00:02:02.542 common/cpt: not in enabled drivers build config 00:02:02.542 common/dpaax: not in enabled drivers build config 00:02:02.542 common/iavf: not in enabled drivers build config 00:02:02.542 common/idpf: not in enabled drivers build config 00:02:02.542 common/ionic: not in enabled drivers build config 00:02:02.542 common/mvep: not in enabled drivers build config 00:02:02.542 common/octeontx: not in enabled drivers build config 00:02:02.542 bus/auxiliary: not in enabled drivers build config 00:02:02.542 bus/cdx: not in enabled drivers build config 00:02:02.542 bus/dpaa: not in enabled drivers build config 00:02:02.542 bus/fslmc: not in enabled drivers build config 00:02:02.542 bus/ifpga: not in enabled drivers build config 00:02:02.542 bus/platform: not in enabled drivers build config 00:02:02.542 bus/uacce: not in enabled drivers build config 00:02:02.542 bus/vmbus: not in enabled drivers build config 00:02:02.542 common/cnxk: not in enabled drivers build config 00:02:02.542 common/mlx5: not in enabled drivers build config 00:02:02.542 common/nfp: not in enabled drivers build config 00:02:02.542 common/nitrox: not in enabled drivers build config 00:02:02.542 
common/qat: not in enabled drivers build config 00:02:02.542 common/sfc_efx: not in enabled drivers build config 00:02:02.542 mempool/bucket: not in enabled drivers build config 00:02:02.542 mempool/cnxk: not in enabled drivers build config 00:02:02.542 mempool/dpaa: not in enabled drivers build config 00:02:02.542 mempool/dpaa2: not in enabled drivers build config 00:02:02.542 mempool/octeontx: not in enabled drivers build config 00:02:02.542 mempool/stack: not in enabled drivers build config 00:02:02.542 dma/cnxk: not in enabled drivers build config 00:02:02.542 dma/dpaa: not in enabled drivers build config 00:02:02.542 dma/dpaa2: not in enabled drivers build config 00:02:02.542 dma/hisilicon: not in enabled drivers build config 00:02:02.542 dma/idxd: not in enabled drivers build config 00:02:02.543 dma/ioat: not in enabled drivers build config 00:02:02.543 dma/skeleton: not in enabled drivers build config 00:02:02.543 net/af_packet: not in enabled drivers build config 00:02:02.543 net/af_xdp: not in enabled drivers build config 00:02:02.543 net/ark: not in enabled drivers build config 00:02:02.543 net/atlantic: not in enabled drivers build config 00:02:02.543 net/avp: not in enabled drivers build config 00:02:02.543 net/axgbe: not in enabled drivers build config 00:02:02.543 net/bnx2x: not in enabled drivers build config 00:02:02.543 net/bnxt: not in enabled drivers build config 00:02:02.543 net/bonding: not in enabled drivers build config 00:02:02.543 net/cnxk: not in enabled drivers build config 00:02:02.543 net/cpfl: not in enabled drivers build config 00:02:02.543 net/cxgbe: not in enabled drivers build config 00:02:02.543 net/dpaa: not in enabled drivers build config 00:02:02.543 net/dpaa2: not in enabled drivers build config 00:02:02.543 net/e1000: not in enabled drivers build config 00:02:02.543 net/ena: not in enabled drivers build config 00:02:02.543 net/enetc: not in enabled drivers build config 00:02:02.543 net/enetfec: not in enabled drivers build 
config 00:02:02.543 net/enic: not in enabled drivers build config 00:02:02.543 net/failsafe: not in enabled drivers build config 00:02:02.543 net/fm10k: not in enabled drivers build config 00:02:02.543 net/gve: not in enabled drivers build config 00:02:02.543 net/hinic: not in enabled drivers build config 00:02:02.543 net/hns3: not in enabled drivers build config 00:02:02.543 net/i40e: not in enabled drivers build config 00:02:02.543 net/iavf: not in enabled drivers build config 00:02:02.543 net/ice: not in enabled drivers build config 00:02:02.543 net/idpf: not in enabled drivers build config 00:02:02.543 net/igc: not in enabled drivers build config 00:02:02.543 net/ionic: not in enabled drivers build config 00:02:02.543 net/ipn3ke: not in enabled drivers build config 00:02:02.543 net/ixgbe: not in enabled drivers build config 00:02:02.543 net/mana: not in enabled drivers build config 00:02:02.543 net/memif: not in enabled drivers build config 00:02:02.543 net/mlx4: not in enabled drivers build config 00:02:02.543 net/mlx5: not in enabled drivers build config 00:02:02.543 net/mvneta: not in enabled drivers build config 00:02:02.543 net/mvpp2: not in enabled drivers build config 00:02:02.543 net/netvsc: not in enabled drivers build config 00:02:02.543 net/nfb: not in enabled drivers build config 00:02:02.543 net/nfp: not in enabled drivers build config 00:02:02.543 net/ngbe: not in enabled drivers build config 00:02:02.543 net/null: not in enabled drivers build config 00:02:02.543 net/octeontx: not in enabled drivers build config 00:02:02.543 net/octeon_ep: not in enabled drivers build config 00:02:02.543 net/pcap: not in enabled drivers build config 00:02:02.543 net/pfe: not in enabled drivers build config 00:02:02.543 net/qede: not in enabled drivers build config 00:02:02.543 net/ring: not in enabled drivers build config 00:02:02.543 net/sfc: not in enabled drivers build config 00:02:02.543 net/softnic: not in enabled drivers build config 00:02:02.543 net/tap: 
not in enabled drivers build config 00:02:02.543 net/thunderx: not in enabled drivers build config 00:02:02.543 net/txgbe: not in enabled drivers build config 00:02:02.543 net/vdev_netvsc: not in enabled drivers build config 00:02:02.543 net/vhost: not in enabled drivers build config 00:02:02.543 net/virtio: not in enabled drivers build config 00:02:02.543 net/vmxnet3: not in enabled drivers build config 00:02:02.543 raw/*: missing internal dependency, "rawdev" 00:02:02.543 crypto/armv8: not in enabled drivers build config 00:02:02.543 crypto/bcmfs: not in enabled drivers build config 00:02:02.543 crypto/caam_jr: not in enabled drivers build config 00:02:02.543 crypto/ccp: not in enabled drivers build config 00:02:02.543 crypto/cnxk: not in enabled drivers build config 00:02:02.543 crypto/dpaa_sec: not in enabled drivers build config 00:02:02.543 crypto/dpaa2_sec: not in enabled drivers build config 00:02:02.543 crypto/ipsec_mb: not in enabled drivers build config 00:02:02.543 crypto/mlx5: not in enabled drivers build config 00:02:02.543 crypto/mvsam: not in enabled drivers build config 00:02:02.543 crypto/nitrox: not in enabled drivers build config 00:02:02.543 crypto/null: not in enabled drivers build config 00:02:02.543 crypto/octeontx: not in enabled drivers build config 00:02:02.543 crypto/openssl: not in enabled drivers build config 00:02:02.543 crypto/scheduler: not in enabled drivers build config 00:02:02.543 crypto/uadk: not in enabled drivers build config 00:02:02.543 crypto/virtio: not in enabled drivers build config 00:02:02.543 compress/isal: not in enabled drivers build config 00:02:02.543 compress/mlx5: not in enabled drivers build config 00:02:02.543 compress/nitrox: not in enabled drivers build config 00:02:02.543 compress/octeontx: not in enabled drivers build config 00:02:02.543 compress/zlib: not in enabled drivers build config 00:02:02.543 regex/*: missing internal dependency, "regexdev" 00:02:02.543 ml/*: missing internal dependency, "mldev" 
00:02:02.543 vdpa/ifc: not in enabled drivers build config 00:02:02.543 vdpa/mlx5: not in enabled drivers build config 00:02:02.543 vdpa/nfp: not in enabled drivers build config 00:02:02.543 vdpa/sfc: not in enabled drivers build config 00:02:02.543 event/*: missing internal dependency, "eventdev" 00:02:02.543 baseband/*: missing internal dependency, "bbdev" 00:02:02.543 gpu/*: missing internal dependency, "gpudev" 00:02:02.543 00:02:02.543 00:02:02.543 Build targets in project: 85 00:02:02.543 00:02:02.543 DPDK 24.03.0 00:02:02.543 00:02:02.543 User defined options 00:02:02.543 buildtype : debug 00:02:02.543 default_library : shared 00:02:02.543 libdir : lib 00:02:02.543 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:02.543 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:02.543 c_link_args : 00:02:02.543 cpu_instruction_set: native 00:02:02.543 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:02.543 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:02.543 enable_docs : false 00:02:02.543 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:02.543 enable_kmods : false 00:02:02.543 max_lcores : 128 00:02:02.543 tests : false 00:02:02.543 00:02:02.543 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:02.812 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:02.812 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.812 [2/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:02.812 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:03.096 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:03.096 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:03.096 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:03.096 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:03.096 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:03.096 [9/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:03.096 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:03.096 [11/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:03.096 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:03.096 [13/268] Linking static target lib/librte_kvargs.a 00:02:03.096 [14/268] Linking static target lib/librte_log.a 00:02:03.096 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:03.096 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:03.096 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:03.096 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:03.096 [19/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:03.096 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:03.096 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:03.096 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:03.096 [23/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:03.096 [24/268] Linking static target lib/librte_pci.a 00:02:03.416 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:03.416 [26/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:03.416 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:03.416 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:03.416 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:03.416 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:03.416 [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:03.416 [32/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:03.416 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:03.416 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:03.416 [35/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:03.416 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:03.416 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:03.416 [38/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:03.416 [39/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.416 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:03.416 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:03.416 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:03.416 [43/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:03.416 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:03.416 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:03.416 [46/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:03.416 [47/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:03.416 
[48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:03.416 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:03.416 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:03.416 [51/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.416 [52/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.416 [53/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:03.416 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:03.416 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:03.416 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:03.416 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:03.416 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:03.416 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:03.416 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:03.416 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:03.416 [62/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.416 [63/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:03.416 [64/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:03.416 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:03.674 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:03.674 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:03.674 [68/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:03.674 [69/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:03.674 [70/268] Compiling 
C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:03.674 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:03.674 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:03.674 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:03.674 [74/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:03.674 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:03.674 [76/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.674 [77/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.674 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:03.675 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:03.675 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:03.675 [81/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.675 [82/268] Linking static target lib/librte_telemetry.a 00:02:03.675 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:03.675 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:03.675 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:03.675 [86/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.675 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:03.675 [88/268] Linking static target lib/librte_meter.a 00:02:03.675 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.675 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:03.675 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:03.675 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:03.675 [93/268] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.675 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:03.675 [95/268] Linking static target lib/librte_ring.a 00:02:03.675 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:03.675 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:03.675 [98/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:03.675 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:03.675 [100/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.675 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:03.675 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.675 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:03.675 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:03.675 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.675 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:03.675 [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:03.675 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:03.675 [109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.675 [110/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.675 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:03.675 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:03.675 [113/268] Linking static target lib/librte_rcu.a 00:02:03.675 [114/268] Linking static target lib/librte_mempool.a 00:02:03.675 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:03.675 [116/268] Compiling C object 
lib/librte_power.a.p/power_power_common.c.o 00:02:03.675 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:03.675 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:03.675 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:03.675 [120/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.675 [121/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.675 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:03.675 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:03.675 [124/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:03.675 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:03.675 [126/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.675 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:03.675 [128/268] Linking static target lib/librte_net.a 00:02:03.675 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:03.675 [130/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:03.675 [131/268] Linking static target lib/librte_cmdline.a 00:02:03.675 [132/268] Linking static target lib/librte_eal.a 00:02:03.675 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:03.675 [134/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.675 [135/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:03.675 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:03.675 [137/268] Linking static target lib/librte_mbuf.a 00:02:03.933 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.933 [139/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:03.933 [140/268] Linking target lib/librte_log.so.24.1 00:02:03.933 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:03.933 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:03.933 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:03.933 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.933 [145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:03.933 [146/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:03.933 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.933 [148/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:03.933 [149/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.933 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:03.933 [151/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.933 [152/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:03.933 [153/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:03.933 [154/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:03.933 [155/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:03.933 [156/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:03.933 [157/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.933 [158/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:03.933 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:03.933 [160/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 
00:02:03.933 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:03.933 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.933 [163/268] Linking static target lib/librte_dmadev.a 00:02:03.933 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:03.933 [165/268] Linking target lib/librte_kvargs.so.24.1 00:02:03.933 [166/268] Linking target lib/librte_telemetry.so.24.1 00:02:03.933 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:03.933 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:03.933 [169/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:03.933 [170/268] Linking static target lib/librte_timer.a 00:02:03.933 [171/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:03.933 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:03.933 [173/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:03.933 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:03.933 [175/268] Linking static target lib/librte_reorder.a 00:02:04.191 [176/268] Linking static target lib/librte_compressdev.a 00:02:04.191 [177/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:04.191 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:04.191 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:04.191 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:04.191 [181/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:04.191 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:04.191 [183/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.191 [184/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:04.191 [185/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:04.191 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:04.191 [187/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.191 [188/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.191 [189/268] Linking static target drivers/librte_bus_vdev.a 00:02:04.191 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:04.191 [191/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:04.191 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:04.191 [193/268] Linking static target lib/librte_power.a 00:02:04.191 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.191 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:04.191 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:04.191 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:04.191 [198/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:04.191 [199/268] Linking static target lib/librte_security.a 00:02:04.191 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:04.191 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.191 [202/268] Linking static target lib/librte_hash.a 00:02:04.447 [203/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.447 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.447 [205/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:04.447 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 
00:02:04.447 [207/268] Linking static target lib/librte_cryptodev.a 00:02:04.447 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.447 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.447 [210/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.447 [211/268] Linking static target drivers/librte_bus_pci.a 00:02:04.447 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:04.447 [213/268] Linking static target drivers/librte_mempool_ring.a 00:02:04.447 [214/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.447 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.447 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.447 [217/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.447 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.705 [219/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.705 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.705 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:04.705 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.705 [223/268] Linking static target lib/librte_ethdev.a 00:02:04.705 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:04.962 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.962 [226/268] Generating drivers/rte_bus_pci.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:05.220 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.786 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:05.786 [229/268] Linking static target lib/librte_vhost.a 00:02:06.044 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.945 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.208 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.208 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.208 [234/268] Linking target lib/librte_eal.so.24.1 00:02:13.467 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.467 [236/268] Linking target lib/librte_meter.so.24.1 00:02:13.467 [237/268] Linking target lib/librte_ring.so.24.1 00:02:13.467 [238/268] Linking target lib/librte_timer.so.24.1 00:02:13.467 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.467 [240/268] Linking target lib/librte_pci.so.24.1 00:02:13.467 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:13.467 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:13.467 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:13.467 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:13.467 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:13.467 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:13.467 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:13.725 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:13.725 [249/268] Linking target 
lib/librte_rcu.so.24.1 00:02:13.725 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:13.725 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:13.725 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:13.725 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:13.984 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:13.984 [255/268] Linking target lib/librte_net.so.24.1 00:02:13.984 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:13.984 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:13.984 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:13.984 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:13.984 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:14.243 [261/268] Linking target lib/librte_security.so.24.1 00:02:14.243 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:14.243 [263/268] Linking target lib/librte_hash.so.24.1 00:02:14.243 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:14.243 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:14.243 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:14.243 [267/268] Linking target lib/librte_power.so.24.1 00:02:14.243 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:14.243 INFO: autodetecting backend as ninja 00:02:14.243 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:15.193 CC lib/log/log.o 00:02:15.193 CC lib/log/log_flags.o 00:02:15.193 CC lib/log/log_deprecated.o 00:02:15.193 CC lib/ut_mock/mock.o 00:02:15.452 CC lib/ut/ut.o 00:02:15.452 LIB libspdk_log.a 00:02:15.452 LIB libspdk_ut_mock.a 00:02:15.452 LIB libspdk_ut.a 00:02:15.452 
SO libspdk_log.so.7.0 00:02:15.452 SO libspdk_ut_mock.so.6.0 00:02:15.452 SO libspdk_ut.so.2.0 00:02:15.452 SYMLINK libspdk_log.so 00:02:15.452 SYMLINK libspdk_ut_mock.so 00:02:15.710 SYMLINK libspdk_ut.so 00:02:15.710 CXX lib/trace_parser/trace.o 00:02:15.968 CC lib/util/base64.o 00:02:15.968 CC lib/dma/dma.o 00:02:15.968 CC lib/util/bit_array.o 00:02:15.968 CC lib/util/crc16.o 00:02:15.968 CC lib/util/cpuset.o 00:02:15.968 CC lib/util/crc32.o 00:02:15.968 CC lib/util/crc32c.o 00:02:15.968 CC lib/util/crc64.o 00:02:15.968 CC lib/util/crc32_ieee.o 00:02:15.968 CC lib/util/dif.o 00:02:15.968 CC lib/util/file.o 00:02:15.968 CC lib/util/fd.o 00:02:15.968 CC lib/util/hexlify.o 00:02:15.968 CC lib/util/iov.o 00:02:15.968 CC lib/util/strerror_tls.o 00:02:15.968 CC lib/util/math.o 00:02:15.968 CC lib/util/pipe.o 00:02:15.968 CC lib/util/string.o 00:02:15.968 CC lib/util/uuid.o 00:02:15.968 CC lib/util/fd_group.o 00:02:15.968 CC lib/util/xor.o 00:02:15.968 CC lib/ioat/ioat.o 00:02:15.968 CC lib/util/zipf.o 00:02:15.968 CC lib/vfio_user/host/vfio_user_pci.o 00:02:15.968 CC lib/vfio_user/host/vfio_user.o 00:02:15.968 LIB libspdk_dma.a 00:02:15.968 SO libspdk_dma.so.4.0 00:02:16.227 SYMLINK libspdk_dma.so 00:02:16.227 LIB libspdk_ioat.a 00:02:16.227 SO libspdk_ioat.so.7.0 00:02:16.227 LIB libspdk_vfio_user.a 00:02:16.227 SYMLINK libspdk_ioat.so 00:02:16.227 SO libspdk_vfio_user.so.5.0 00:02:16.227 LIB libspdk_util.a 00:02:16.227 SYMLINK libspdk_vfio_user.so 00:02:16.485 SO libspdk_util.so.9.1 00:02:16.485 SYMLINK libspdk_util.so 00:02:16.485 LIB libspdk_trace_parser.a 00:02:16.485 SO libspdk_trace_parser.so.5.0 00:02:16.743 SYMLINK libspdk_trace_parser.so 00:02:16.743 CC lib/idxd/idxd.o 00:02:16.743 CC lib/idxd/idxd_user.o 00:02:16.743 CC lib/idxd/idxd_kernel.o 00:02:16.743 CC lib/conf/conf.o 00:02:16.743 CC lib/json/json_parse.o 00:02:16.743 CC lib/rdma_provider/common.o 00:02:16.743 CC lib/env_dpdk/memory.o 00:02:16.743 CC lib/json/json_util.o 00:02:16.743 CC 
lib/env_dpdk/env.o 00:02:16.743 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:16.743 CC lib/json/json_write.o 00:02:16.743 CC lib/env_dpdk/init.o 00:02:16.743 CC lib/env_dpdk/pci.o 00:02:16.743 CC lib/env_dpdk/threads.o 00:02:16.743 CC lib/env_dpdk/pci_ioat.o 00:02:16.743 CC lib/env_dpdk/pci_vmd.o 00:02:16.743 CC lib/env_dpdk/pci_virtio.o 00:02:16.743 CC lib/vmd/led.o 00:02:16.743 CC lib/vmd/vmd.o 00:02:16.743 CC lib/env_dpdk/pci_idxd.o 00:02:16.743 CC lib/env_dpdk/pci_event.o 00:02:16.743 CC lib/env_dpdk/sigbus_handler.o 00:02:16.743 CC lib/env_dpdk/pci_dpdk.o 00:02:16.743 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:16.743 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:16.743 CC lib/rdma_utils/rdma_utils.o 00:02:17.001 LIB libspdk_rdma_provider.a 00:02:17.001 LIB libspdk_conf.a 00:02:17.001 SO libspdk_rdma_provider.so.6.0 00:02:17.001 SO libspdk_conf.so.6.0 00:02:17.001 LIB libspdk_json.a 00:02:17.001 LIB libspdk_rdma_utils.a 00:02:17.001 SYMLINK libspdk_rdma_provider.so 00:02:17.001 SYMLINK libspdk_conf.so 00:02:17.001 SO libspdk_json.so.6.0 00:02:17.001 SO libspdk_rdma_utils.so.1.0 00:02:17.001 SYMLINK libspdk_rdma_utils.so 00:02:17.001 SYMLINK libspdk_json.so 00:02:17.260 LIB libspdk_idxd.a 00:02:17.260 SO libspdk_idxd.so.12.0 00:02:17.260 LIB libspdk_vmd.a 00:02:17.260 SYMLINK libspdk_idxd.so 00:02:17.260 SO libspdk_vmd.so.6.0 00:02:17.260 SYMLINK libspdk_vmd.so 00:02:17.519 CC lib/jsonrpc/jsonrpc_server.o 00:02:17.519 CC lib/jsonrpc/jsonrpc_client.o 00:02:17.519 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:17.519 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:17.519 LIB libspdk_jsonrpc.a 00:02:17.777 SO libspdk_jsonrpc.so.6.0 00:02:17.777 SYMLINK libspdk_jsonrpc.so 00:02:17.777 LIB libspdk_env_dpdk.a 00:02:17.777 SO libspdk_env_dpdk.so.14.1 00:02:18.036 SYMLINK libspdk_env_dpdk.so 00:02:18.036 CC lib/rpc/rpc.o 00:02:18.294 LIB libspdk_rpc.a 00:02:18.295 SO libspdk_rpc.so.6.0 00:02:18.295 SYMLINK libspdk_rpc.so 00:02:18.553 CC lib/trace/trace.o 00:02:18.553 CC 
lib/trace/trace_flags.o 00:02:18.553 CC lib/trace/trace_rpc.o 00:02:18.553 CC lib/keyring/keyring.o 00:02:18.553 CC lib/keyring/keyring_rpc.o 00:02:18.553 CC lib/notify/notify.o 00:02:18.553 CC lib/notify/notify_rpc.o 00:02:18.811 LIB libspdk_notify.a 00:02:18.811 SO libspdk_notify.so.6.0 00:02:18.811 LIB libspdk_trace.a 00:02:18.811 LIB libspdk_keyring.a 00:02:18.811 SO libspdk_trace.so.10.0 00:02:18.811 SO libspdk_keyring.so.1.0 00:02:18.811 SYMLINK libspdk_notify.so 00:02:18.811 SYMLINK libspdk_keyring.so 00:02:18.811 SYMLINK libspdk_trace.so 00:02:19.070 CC lib/sock/sock.o 00:02:19.070 CC lib/sock/sock_rpc.o 00:02:19.070 CC lib/thread/thread.o 00:02:19.070 CC lib/thread/iobuf.o 00:02:19.328 LIB libspdk_sock.a 00:02:19.587 SO libspdk_sock.so.10.0 00:02:19.587 SYMLINK libspdk_sock.so 00:02:19.845 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:19.845 CC lib/nvme/nvme_ctrlr.o 00:02:19.845 CC lib/nvme/nvme_fabric.o 00:02:19.845 CC lib/nvme/nvme_ns_cmd.o 00:02:19.845 CC lib/nvme/nvme_pcie_common.o 00:02:19.845 CC lib/nvme/nvme_ns.o 00:02:19.845 CC lib/nvme/nvme_pcie.o 00:02:19.845 CC lib/nvme/nvme_qpair.o 00:02:19.845 CC lib/nvme/nvme.o 00:02:19.845 CC lib/nvme/nvme_quirks.o 00:02:19.845 CC lib/nvme/nvme_transport.o 00:02:19.845 CC lib/nvme/nvme_discovery.o 00:02:19.845 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:19.845 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:19.845 CC lib/nvme/nvme_tcp.o 00:02:19.845 CC lib/nvme/nvme_opal.o 00:02:19.845 CC lib/nvme/nvme_io_msg.o 00:02:19.845 CC lib/nvme/nvme_zns.o 00:02:19.845 CC lib/nvme/nvme_poll_group.o 00:02:19.845 CC lib/nvme/nvme_stubs.o 00:02:19.845 CC lib/nvme/nvme_auth.o 00:02:19.845 CC lib/nvme/nvme_cuse.o 00:02:19.845 CC lib/nvme/nvme_vfio_user.o 00:02:19.845 CC lib/nvme/nvme_rdma.o 00:02:20.103 LIB libspdk_thread.a 00:02:20.387 SO libspdk_thread.so.10.1 00:02:20.387 SYMLINK libspdk_thread.so 00:02:20.651 CC lib/virtio/virtio.o 00:02:20.651 CC lib/vfu_tgt/tgt_endpoint.o 00:02:20.652 CC lib/blob/blobstore.o 00:02:20.652 CC 
lib/vfu_tgt/tgt_rpc.o 00:02:20.652 CC lib/blob/request.o 00:02:20.652 CC lib/virtio/virtio_vhost_user.o 00:02:20.652 CC lib/blob/zeroes.o 00:02:20.652 CC lib/virtio/virtio_vfio_user.o 00:02:20.652 CC lib/virtio/virtio_pci.o 00:02:20.652 CC lib/blob/blob_bs_dev.o 00:02:20.652 CC lib/accel/accel_rpc.o 00:02:20.652 CC lib/accel/accel.o 00:02:20.652 CC lib/accel/accel_sw.o 00:02:20.652 CC lib/init/json_config.o 00:02:20.652 CC lib/init/subsystem.o 00:02:20.652 CC lib/init/subsystem_rpc.o 00:02:20.652 CC lib/init/rpc.o 00:02:20.910 LIB libspdk_init.a 00:02:20.910 LIB libspdk_vfu_tgt.a 00:02:20.910 LIB libspdk_virtio.a 00:02:20.910 SO libspdk_init.so.5.0 00:02:20.910 SO libspdk_virtio.so.7.0 00:02:20.910 SO libspdk_vfu_tgt.so.3.0 00:02:20.910 SYMLINK libspdk_init.so 00:02:20.910 SYMLINK libspdk_vfu_tgt.so 00:02:20.910 SYMLINK libspdk_virtio.so 00:02:21.167 CC lib/event/app.o 00:02:21.167 CC lib/event/log_rpc.o 00:02:21.167 CC lib/event/reactor.o 00:02:21.167 CC lib/event/app_rpc.o 00:02:21.167 CC lib/event/scheduler_static.o 00:02:21.425 LIB libspdk_accel.a 00:02:21.425 SO libspdk_accel.so.15.1 00:02:21.425 SYMLINK libspdk_accel.so 00:02:21.425 LIB libspdk_nvme.a 00:02:21.425 SO libspdk_nvme.so.13.1 00:02:21.425 LIB libspdk_event.a 00:02:21.683 SO libspdk_event.so.14.0 00:02:21.683 SYMLINK libspdk_event.so 00:02:21.683 CC lib/bdev/bdev.o 00:02:21.683 CC lib/bdev/bdev_rpc.o 00:02:21.683 CC lib/bdev/bdev_zone.o 00:02:21.683 CC lib/bdev/scsi_nvme.o 00:02:21.683 CC lib/bdev/part.o 00:02:21.683 SYMLINK libspdk_nvme.so 00:02:22.615 LIB libspdk_blob.a 00:02:22.615 SO libspdk_blob.so.11.0 00:02:22.873 SYMLINK libspdk_blob.so 00:02:23.130 CC lib/blobfs/blobfs.o 00:02:23.130 CC lib/blobfs/tree.o 00:02:23.130 CC lib/lvol/lvol.o 00:02:23.386 LIB libspdk_bdev.a 00:02:23.386 SO libspdk_bdev.so.15.1 00:02:23.642 SYMLINK libspdk_bdev.so 00:02:23.642 LIB libspdk_blobfs.a 00:02:23.642 SO libspdk_blobfs.so.10.0 00:02:23.642 LIB libspdk_lvol.a 00:02:23.642 SYMLINK libspdk_blobfs.so 
00:02:23.642 SO libspdk_lvol.so.10.0 00:02:23.900 SYMLINK libspdk_lvol.so 00:02:23.900 CC lib/scsi/dev.o 00:02:23.900 CC lib/scsi/lun.o 00:02:23.900 CC lib/scsi/port.o 00:02:23.900 CC lib/scsi/scsi.o 00:02:23.900 CC lib/scsi/scsi_bdev.o 00:02:23.900 CC lib/scsi/scsi_pr.o 00:02:23.900 CC lib/scsi/scsi_rpc.o 00:02:23.900 CC lib/scsi/task.o 00:02:23.900 CC lib/nbd/nbd.o 00:02:23.900 CC lib/nbd/nbd_rpc.o 00:02:23.900 CC lib/ublk/ublk.o 00:02:23.900 CC lib/ublk/ublk_rpc.o 00:02:23.900 CC lib/nvmf/ctrlr.o 00:02:23.900 CC lib/nvmf/ctrlr_discovery.o 00:02:23.900 CC lib/ftl/ftl_core.o 00:02:23.900 CC lib/nvmf/ctrlr_bdev.o 00:02:23.900 CC lib/ftl/ftl_debug.o 00:02:23.900 CC lib/ftl/ftl_init.o 00:02:23.900 CC lib/ftl/ftl_layout.o 00:02:23.900 CC lib/nvmf/subsystem.o 00:02:23.900 CC lib/nvmf/nvmf_rpc.o 00:02:23.900 CC lib/nvmf/nvmf.o 00:02:23.900 CC lib/ftl/ftl_io.o 00:02:23.900 CC lib/nvmf/transport.o 00:02:23.900 CC lib/ftl/ftl_sb.o 00:02:23.900 CC lib/nvmf/tcp.o 00:02:23.900 CC lib/ftl/ftl_l2p.o 00:02:23.900 CC lib/nvmf/mdns_server.o 00:02:23.900 CC lib/nvmf/stubs.o 00:02:23.900 CC lib/ftl/ftl_l2p_flat.o 00:02:23.900 CC lib/nvmf/vfio_user.o 00:02:23.900 CC lib/ftl/ftl_nv_cache.o 00:02:23.900 CC lib/nvmf/rdma.o 00:02:23.900 CC lib/ftl/ftl_band_ops.o 00:02:23.900 CC lib/nvmf/auth.o 00:02:23.900 CC lib/ftl/ftl_band.o 00:02:23.900 CC lib/ftl/ftl_writer.o 00:02:23.900 CC lib/ftl/ftl_rq.o 00:02:23.900 CC lib/ftl/ftl_l2p_cache.o 00:02:23.900 CC lib/ftl/ftl_reloc.o 00:02:23.900 CC lib/ftl/ftl_p2l.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:23.900 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:23.900 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:23.900 CC lib/ftl/utils/ftl_conf.o 00:02:23.900 CC lib/ftl/utils/ftl_mempool.o 00:02:23.900 CC lib/ftl/utils/ftl_md.o 00:02:23.900 CC lib/ftl/utils/ftl_property.o 00:02:23.900 CC lib/ftl/utils/ftl_bitmap.o 00:02:23.900 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:23.900 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:23.900 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:23.900 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:23.900 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:23.900 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:23.900 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:23.900 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:23.900 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:23.900 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:23.900 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:23.900 CC lib/ftl/base/ftl_base_dev.o 00:02:23.900 CC lib/ftl/base/ftl_base_bdev.o 00:02:23.900 CC lib/ftl/ftl_trace.o 00:02:24.464 LIB libspdk_scsi.a 00:02:24.464 LIB libspdk_nbd.a 00:02:24.464 SO libspdk_scsi.so.9.0 00:02:24.464 SO libspdk_nbd.so.7.0 00:02:24.464 LIB libspdk_ublk.a 00:02:24.464 SYMLINK libspdk_nbd.so 00:02:24.464 SYMLINK libspdk_scsi.so 00:02:24.464 SO libspdk_ublk.so.3.0 00:02:24.722 SYMLINK libspdk_ublk.so 00:02:24.722 CC lib/iscsi/conn.o 00:02:24.722 CC lib/iscsi/init_grp.o 00:02:24.722 CC lib/iscsi/iscsi.o 00:02:24.722 CC lib/iscsi/param.o 00:02:24.722 CC lib/iscsi/md5.o 00:02:24.722 CC lib/iscsi/portal_grp.o 00:02:24.722 CC lib/iscsi/tgt_node.o 00:02:24.722 CC lib/vhost/vhost.o 00:02:24.722 CC lib/iscsi/iscsi_subsystem.o 00:02:24.722 CC lib/vhost/vhost_rpc.o 00:02:24.722 CC lib/iscsi/iscsi_rpc.o 00:02:24.722 CC lib/vhost/vhost_scsi.o 00:02:24.722 CC lib/iscsi/task.o 00:02:24.722 CC lib/vhost/vhost_blk.o 00:02:24.722 CC lib/vhost/rte_vhost_user.o 00:02:24.722 LIB libspdk_ftl.a 00:02:24.979 SO libspdk_ftl.so.9.0 00:02:25.236 SYMLINK libspdk_ftl.so 00:02:25.509 LIB 
libspdk_vhost.a 00:02:25.509 LIB libspdk_nvmf.a 00:02:25.767 SO libspdk_vhost.so.8.0 00:02:25.767 SO libspdk_nvmf.so.19.0 00:02:25.767 SYMLINK libspdk_vhost.so 00:02:25.767 LIB libspdk_iscsi.a 00:02:25.767 SYMLINK libspdk_nvmf.so 00:02:25.767 SO libspdk_iscsi.so.8.0 00:02:26.025 SYMLINK libspdk_iscsi.so 00:02:26.593 CC module/env_dpdk/env_dpdk_rpc.o 00:02:26.593 CC module/vfu_device/vfu_virtio.o 00:02:26.593 CC module/vfu_device/vfu_virtio_blk.o 00:02:26.593 CC module/vfu_device/vfu_virtio_scsi.o 00:02:26.593 CC module/vfu_device/vfu_virtio_rpc.o 00:02:26.593 LIB libspdk_env_dpdk_rpc.a 00:02:26.593 CC module/accel/error/accel_error.o 00:02:26.593 CC module/accel/dsa/accel_dsa.o 00:02:26.593 CC module/accel/error/accel_error_rpc.o 00:02:26.593 CC module/accel/dsa/accel_dsa_rpc.o 00:02:26.593 CC module/accel/ioat/accel_ioat.o 00:02:26.593 CC module/blob/bdev/blob_bdev.o 00:02:26.593 CC module/accel/ioat/accel_ioat_rpc.o 00:02:26.593 CC module/sock/posix/posix.o 00:02:26.593 CC module/scheduler/gscheduler/gscheduler.o 00:02:26.593 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:26.593 SO libspdk_env_dpdk_rpc.so.6.0 00:02:26.593 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:26.593 CC module/keyring/file/keyring_rpc.o 00:02:26.593 CC module/keyring/file/keyring.o 00:02:26.593 CC module/accel/iaa/accel_iaa.o 00:02:26.593 CC module/accel/iaa/accel_iaa_rpc.o 00:02:26.593 CC module/keyring/linux/keyring_rpc.o 00:02:26.593 CC module/keyring/linux/keyring.o 00:02:26.593 SYMLINK libspdk_env_dpdk_rpc.so 00:02:26.852 LIB libspdk_scheduler_gscheduler.a 00:02:26.852 LIB libspdk_accel_error.a 00:02:26.852 LIB libspdk_keyring_file.a 00:02:26.852 LIB libspdk_scheduler_dpdk_governor.a 00:02:26.852 LIB libspdk_keyring_linux.a 00:02:26.852 LIB libspdk_accel_ioat.a 00:02:26.852 SO libspdk_scheduler_gscheduler.so.4.0 00:02:26.852 SO libspdk_accel_error.so.2.0 00:02:26.852 LIB libspdk_scheduler_dynamic.a 00:02:26.852 SO libspdk_accel_ioat.so.6.0 00:02:26.852 SO 
libspdk_keyring_file.so.1.0 00:02:26.852 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:26.852 SO libspdk_keyring_linux.so.1.0 00:02:26.852 SO libspdk_scheduler_dynamic.so.4.0 00:02:26.852 LIB libspdk_accel_iaa.a 00:02:26.852 LIB libspdk_accel_dsa.a 00:02:26.852 LIB libspdk_blob_bdev.a 00:02:26.852 SO libspdk_accel_iaa.so.3.0 00:02:26.852 SYMLINK libspdk_accel_error.so 00:02:26.852 SYMLINK libspdk_scheduler_gscheduler.so 00:02:26.852 SYMLINK libspdk_accel_ioat.so 00:02:26.852 SO libspdk_accel_dsa.so.5.0 00:02:26.852 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:26.852 SYMLINK libspdk_keyring_file.so 00:02:26.852 SO libspdk_blob_bdev.so.11.0 00:02:26.852 SYMLINK libspdk_keyring_linux.so 00:02:26.852 SYMLINK libspdk_scheduler_dynamic.so 00:02:26.852 SYMLINK libspdk_accel_dsa.so 00:02:26.852 SYMLINK libspdk_accel_iaa.so 00:02:26.852 SYMLINK libspdk_blob_bdev.so 00:02:26.852 LIB libspdk_vfu_device.a 00:02:26.852 SO libspdk_vfu_device.so.3.0 00:02:27.110 SYMLINK libspdk_vfu_device.so 00:02:27.110 LIB libspdk_sock_posix.a 00:02:27.110 SO libspdk_sock_posix.so.6.0 00:02:27.368 SYMLINK libspdk_sock_posix.so 00:02:27.368 CC module/bdev/error/vbdev_error.o 00:02:27.368 CC module/bdev/error/vbdev_error_rpc.o 00:02:27.368 CC module/bdev/delay/vbdev_delay.o 00:02:27.368 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:27.368 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:27.368 CC module/bdev/lvol/vbdev_lvol.o 00:02:27.368 CC module/blobfs/bdev/blobfs_bdev.o 00:02:27.368 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:27.368 CC module/bdev/malloc/bdev_malloc.o 00:02:27.368 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:27.368 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:27.368 CC module/bdev/aio/bdev_aio.o 00:02:27.368 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:27.368 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:27.368 CC module/bdev/aio/bdev_aio_rpc.o 00:02:27.368 CC module/bdev/split/vbdev_split.o 00:02:27.368 CC module/bdev/gpt/gpt.o 00:02:27.368 CC 
module/bdev/gpt/vbdev_gpt.o 00:02:27.368 CC module/bdev/split/vbdev_split_rpc.o 00:02:27.368 CC module/bdev/null/bdev_null.o 00:02:27.368 CC module/bdev/null/bdev_null_rpc.o 00:02:27.368 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:27.368 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:27.368 CC module/bdev/raid/bdev_raid_rpc.o 00:02:27.368 CC module/bdev/iscsi/bdev_iscsi.o 00:02:27.368 CC module/bdev/raid/bdev_raid.o 00:02:27.368 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:27.368 CC module/bdev/raid/bdev_raid_sb.o 00:02:27.368 CC module/bdev/nvme/bdev_nvme.o 00:02:27.368 CC module/bdev/raid/raid0.o 00:02:27.368 CC module/bdev/ftl/bdev_ftl.o 00:02:27.368 CC module/bdev/raid/raid1.o 00:02:27.368 CC module/bdev/raid/concat.o 00:02:27.368 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:27.368 CC module/bdev/passthru/vbdev_passthru.o 00:02:27.368 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:27.368 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:27.368 CC module/bdev/nvme/nvme_rpc.o 00:02:27.368 CC module/bdev/nvme/bdev_mdns_client.o 00:02:27.368 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:27.368 CC module/bdev/nvme/vbdev_opal.o 00:02:27.368 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:27.626 LIB libspdk_blobfs_bdev.a 00:02:27.626 SO libspdk_blobfs_bdev.so.6.0 00:02:27.626 LIB libspdk_bdev_split.a 00:02:27.626 LIB libspdk_bdev_error.a 00:02:27.626 SO libspdk_bdev_split.so.6.0 00:02:27.626 LIB libspdk_bdev_null.a 00:02:27.626 SYMLINK libspdk_blobfs_bdev.so 00:02:27.626 LIB libspdk_bdev_ftl.a 00:02:27.626 LIB libspdk_bdev_gpt.a 00:02:27.626 SO libspdk_bdev_error.so.6.0 00:02:27.626 SO libspdk_bdev_null.so.6.0 00:02:27.626 SO libspdk_bdev_ftl.so.6.0 00:02:27.626 LIB libspdk_bdev_malloc.a 00:02:27.626 LIB libspdk_bdev_aio.a 00:02:27.627 SO libspdk_bdev_gpt.so.6.0 00:02:27.627 LIB libspdk_bdev_passthru.a 00:02:27.627 SYMLINK libspdk_bdev_split.so 00:02:27.627 LIB libspdk_bdev_delay.a 00:02:27.627 LIB libspdk_bdev_zone_block.a 00:02:27.627 SYMLINK libspdk_bdev_error.so 
00:02:27.627 LIB libspdk_bdev_iscsi.a 00:02:27.627 SO libspdk_bdev_aio.so.6.0 00:02:27.627 SO libspdk_bdev_malloc.so.6.0 00:02:27.627 SO libspdk_bdev_delay.so.6.0 00:02:27.627 SYMLINK libspdk_bdev_null.so 00:02:27.627 SO libspdk_bdev_passthru.so.6.0 00:02:27.627 SYMLINK libspdk_bdev_ftl.so 00:02:27.627 SO libspdk_bdev_zone_block.so.6.0 00:02:27.884 SO libspdk_bdev_iscsi.so.6.0 00:02:27.884 SYMLINK libspdk_bdev_gpt.so 00:02:27.884 SYMLINK libspdk_bdev_malloc.so 00:02:27.884 SYMLINK libspdk_bdev_passthru.so 00:02:27.884 SYMLINK libspdk_bdev_aio.so 00:02:27.884 SYMLINK libspdk_bdev_delay.so 00:02:27.884 LIB libspdk_bdev_lvol.a 00:02:27.884 SYMLINK libspdk_bdev_zone_block.so 00:02:27.884 SYMLINK libspdk_bdev_iscsi.so 00:02:27.884 LIB libspdk_bdev_virtio.a 00:02:27.884 SO libspdk_bdev_lvol.so.6.0 00:02:27.884 SO libspdk_bdev_virtio.so.6.0 00:02:27.884 SYMLINK libspdk_bdev_lvol.so 00:02:27.884 SYMLINK libspdk_bdev_virtio.so 00:02:28.142 LIB libspdk_bdev_raid.a 00:02:28.142 SO libspdk_bdev_raid.so.6.0 00:02:28.142 SYMLINK libspdk_bdev_raid.so 00:02:29.077 LIB libspdk_bdev_nvme.a 00:02:29.077 SO libspdk_bdev_nvme.so.7.0 00:02:29.077 SYMLINK libspdk_bdev_nvme.so 00:02:29.643 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:29.643 CC module/event/subsystems/iobuf/iobuf.o 00:02:29.643 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:29.643 CC module/event/subsystems/vmd/vmd.o 00:02:29.643 CC module/event/subsystems/scheduler/scheduler.o 00:02:29.643 CC module/event/subsystems/sock/sock.o 00:02:29.643 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:29.643 CC module/event/subsystems/keyring/keyring.o 00:02:29.643 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:29.901 LIB libspdk_event_vhost_blk.a 00:02:29.901 LIB libspdk_event_vmd.a 00:02:29.901 SO libspdk_event_vhost_blk.so.3.0 00:02:29.901 LIB libspdk_event_sock.a 00:02:29.901 LIB libspdk_event_keyring.a 00:02:29.901 LIB libspdk_event_scheduler.a 00:02:29.901 LIB libspdk_event_iobuf.a 00:02:29.901 LIB 
libspdk_event_vfu_tgt.a 00:02:29.901 SO libspdk_event_sock.so.5.0 00:02:29.901 SO libspdk_event_vmd.so.6.0 00:02:29.901 SO libspdk_event_keyring.so.1.0 00:02:29.901 SO libspdk_event_iobuf.so.3.0 00:02:29.901 SO libspdk_event_scheduler.so.4.0 00:02:29.901 SYMLINK libspdk_event_vhost_blk.so 00:02:29.901 SO libspdk_event_vfu_tgt.so.3.0 00:02:29.901 SYMLINK libspdk_event_sock.so 00:02:29.901 SYMLINK libspdk_event_vmd.so 00:02:29.901 SYMLINK libspdk_event_keyring.so 00:02:29.901 SYMLINK libspdk_event_iobuf.so 00:02:29.901 SYMLINK libspdk_event_scheduler.so 00:02:29.901 SYMLINK libspdk_event_vfu_tgt.so 00:02:30.159 CC module/event/subsystems/accel/accel.o 00:02:30.418 LIB libspdk_event_accel.a 00:02:30.418 SO libspdk_event_accel.so.6.0 00:02:30.418 SYMLINK libspdk_event_accel.so 00:02:30.677 CC module/event/subsystems/bdev/bdev.o 00:02:30.935 LIB libspdk_event_bdev.a 00:02:30.935 SO libspdk_event_bdev.so.6.0 00:02:30.935 SYMLINK libspdk_event_bdev.so 00:02:31.194 CC module/event/subsystems/ublk/ublk.o 00:02:31.194 CC module/event/subsystems/nbd/nbd.o 00:02:31.194 CC module/event/subsystems/scsi/scsi.o 00:02:31.194 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:31.194 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:31.194 LIB libspdk_event_ublk.a 00:02:31.453 LIB libspdk_event_nbd.a 00:02:31.453 SO libspdk_event_ublk.so.3.0 00:02:31.453 LIB libspdk_event_scsi.a 00:02:31.453 SO libspdk_event_nbd.so.6.0 00:02:31.453 SO libspdk_event_scsi.so.6.0 00:02:31.453 LIB libspdk_event_nvmf.a 00:02:31.453 SYMLINK libspdk_event_ublk.so 00:02:31.453 SYMLINK libspdk_event_nbd.so 00:02:31.453 SO libspdk_event_nvmf.so.6.0 00:02:31.453 SYMLINK libspdk_event_scsi.so 00:02:31.453 SYMLINK libspdk_event_nvmf.so 00:02:31.710 CC module/event/subsystems/iscsi/iscsi.o 00:02:31.710 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:31.968 LIB libspdk_event_iscsi.a 00:02:31.968 LIB libspdk_event_vhost_scsi.a 00:02:31.968 SO libspdk_event_vhost_scsi.so.3.0 00:02:31.968 SO 
libspdk_event_iscsi.so.6.0 00:02:31.968 SYMLINK libspdk_event_vhost_scsi.so 00:02:31.968 SYMLINK libspdk_event_iscsi.so 00:02:32.226 SO libspdk.so.6.0 00:02:32.226 SYMLINK libspdk.so 00:02:32.497 CXX app/trace/trace.o 00:02:32.497 CC app/spdk_nvme_identify/identify.o 00:02:32.497 CC app/spdk_top/spdk_top.o 00:02:32.497 CC app/spdk_nvme_perf/perf.o 00:02:32.497 CC app/trace_record/trace_record.o 00:02:32.497 CC app/spdk_nvme_discover/discovery_aer.o 00:02:32.497 TEST_HEADER include/spdk/accel.h 00:02:32.497 TEST_HEADER include/spdk/accel_module.h 00:02:32.497 TEST_HEADER include/spdk/assert.h 00:02:32.497 TEST_HEADER include/spdk/barrier.h 00:02:32.497 CC app/spdk_lspci/spdk_lspci.o 00:02:32.497 TEST_HEADER include/spdk/base64.h 00:02:32.497 TEST_HEADER include/spdk/bdev.h 00:02:32.497 TEST_HEADER include/spdk/bdev_zone.h 00:02:32.497 TEST_HEADER include/spdk/bdev_module.h 00:02:32.497 TEST_HEADER include/spdk/bit_pool.h 00:02:32.497 TEST_HEADER include/spdk/bit_array.h 00:02:32.497 CC test/rpc_client/rpc_client_test.o 00:02:32.497 TEST_HEADER include/spdk/blob_bdev.h 00:02:32.497 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:32.497 TEST_HEADER include/spdk/blobfs.h 00:02:32.497 TEST_HEADER include/spdk/conf.h 00:02:32.497 TEST_HEADER include/spdk/config.h 00:02:32.497 TEST_HEADER include/spdk/blob.h 00:02:32.497 TEST_HEADER include/spdk/cpuset.h 00:02:32.498 TEST_HEADER include/spdk/crc16.h 00:02:32.498 TEST_HEADER include/spdk/crc64.h 00:02:32.498 TEST_HEADER include/spdk/crc32.h 00:02:32.498 TEST_HEADER include/spdk/endian.h 00:02:32.498 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:32.498 TEST_HEADER include/spdk/dif.h 00:02:32.498 TEST_HEADER include/spdk/dma.h 00:02:32.498 TEST_HEADER include/spdk/env_dpdk.h 00:02:32.498 TEST_HEADER include/spdk/env.h 00:02:32.498 TEST_HEADER include/spdk/fd_group.h 00:02:32.498 TEST_HEADER include/spdk/event.h 00:02:32.498 TEST_HEADER include/spdk/fd.h 00:02:32.498 TEST_HEADER include/spdk/ftl.h 00:02:32.498 TEST_HEADER 
include/spdk/file.h 00:02:32.498 TEST_HEADER include/spdk/gpt_spec.h 00:02:32.498 TEST_HEADER include/spdk/idxd.h 00:02:32.498 TEST_HEADER include/spdk/hexlify.h 00:02:32.498 TEST_HEADER include/spdk/histogram_data.h 00:02:32.498 TEST_HEADER include/spdk/init.h 00:02:32.498 TEST_HEADER include/spdk/idxd_spec.h 00:02:32.498 TEST_HEADER include/spdk/ioat.h 00:02:32.498 TEST_HEADER include/spdk/iscsi_spec.h 00:02:32.498 TEST_HEADER include/spdk/ioat_spec.h 00:02:32.498 TEST_HEADER include/spdk/json.h 00:02:32.498 TEST_HEADER include/spdk/keyring.h 00:02:32.498 TEST_HEADER include/spdk/jsonrpc.h 00:02:32.498 TEST_HEADER include/spdk/log.h 00:02:32.498 TEST_HEADER include/spdk/likely.h 00:02:32.498 TEST_HEADER include/spdk/keyring_module.h 00:02:32.498 TEST_HEADER include/spdk/lvol.h 00:02:32.498 TEST_HEADER include/spdk/mmio.h 00:02:32.498 TEST_HEADER include/spdk/nbd.h 00:02:32.498 TEST_HEADER include/spdk/memory.h 00:02:32.498 TEST_HEADER include/spdk/notify.h 00:02:32.498 TEST_HEADER include/spdk/nvme.h 00:02:32.498 TEST_HEADER include/spdk/nvme_intel.h 00:02:32.498 CC app/spdk_dd/spdk_dd.o 00:02:32.498 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:32.498 TEST_HEADER include/spdk/nvme_spec.h 00:02:32.498 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:32.498 CC app/nvmf_tgt/nvmf_main.o 00:02:32.498 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:32.498 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:32.498 TEST_HEADER include/spdk/nvme_zns.h 00:02:32.498 TEST_HEADER include/spdk/nvmf.h 00:02:32.498 TEST_HEADER include/spdk/nvmf_spec.h 00:02:32.498 CC app/iscsi_tgt/iscsi_tgt.o 00:02:32.498 TEST_HEADER include/spdk/nvmf_transport.h 00:02:32.498 CC app/spdk_tgt/spdk_tgt.o 00:02:32.498 TEST_HEADER include/spdk/opal.h 00:02:32.498 TEST_HEADER include/spdk/pci_ids.h 00:02:32.498 TEST_HEADER include/spdk/pipe.h 00:02:32.498 TEST_HEADER include/spdk/opal_spec.h 00:02:32.498 TEST_HEADER include/spdk/queue.h 00:02:32.498 TEST_HEADER include/spdk/reduce.h 00:02:32.498 TEST_HEADER 
include/spdk/rpc.h 00:02:32.498 TEST_HEADER include/spdk/scsi_spec.h 00:02:32.498 TEST_HEADER include/spdk/scsi.h 00:02:32.498 TEST_HEADER include/spdk/scheduler.h 00:02:32.498 TEST_HEADER include/spdk/stdinc.h 00:02:32.498 TEST_HEADER include/spdk/sock.h 00:02:32.498 TEST_HEADER include/spdk/thread.h 00:02:32.498 TEST_HEADER include/spdk/string.h 00:02:32.498 TEST_HEADER include/spdk/trace_parser.h 00:02:32.498 TEST_HEADER include/spdk/ublk.h 00:02:32.498 TEST_HEADER include/spdk/util.h 00:02:32.498 TEST_HEADER include/spdk/tree.h 00:02:32.498 TEST_HEADER include/spdk/trace.h 00:02:32.498 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:32.498 TEST_HEADER include/spdk/uuid.h 00:02:32.498 TEST_HEADER include/spdk/version.h 00:02:32.498 TEST_HEADER include/spdk/vmd.h 00:02:32.498 TEST_HEADER include/spdk/xor.h 00:02:32.498 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:32.498 TEST_HEADER include/spdk/zipf.h 00:02:32.498 TEST_HEADER include/spdk/vhost.h 00:02:32.498 CXX test/cpp_headers/accel.o 00:02:32.498 CXX test/cpp_headers/accel_module.o 00:02:32.498 CXX test/cpp_headers/assert.o 00:02:32.498 CXX test/cpp_headers/base64.o 00:02:32.498 CXX test/cpp_headers/barrier.o 00:02:32.498 CXX test/cpp_headers/bdev_zone.o 00:02:32.498 CXX test/cpp_headers/bdev.o 00:02:32.498 CXX test/cpp_headers/bdev_module.o 00:02:32.498 CXX test/cpp_headers/bit_pool.o 00:02:32.498 CXX test/cpp_headers/bit_array.o 00:02:32.498 CXX test/cpp_headers/blobfs_bdev.o 00:02:32.498 CXX test/cpp_headers/blob_bdev.o 00:02:32.498 CXX test/cpp_headers/blobfs.o 00:02:32.498 CXX test/cpp_headers/blob.o 00:02:32.498 CXX test/cpp_headers/conf.o 00:02:32.498 CXX test/cpp_headers/config.o 00:02:32.498 CXX test/cpp_headers/crc16.o 00:02:32.498 CXX test/cpp_headers/crc64.o 00:02:32.498 CXX test/cpp_headers/cpuset.o 00:02:32.498 CXX test/cpp_headers/dif.o 00:02:32.498 CXX test/cpp_headers/dma.o 00:02:32.498 CXX test/cpp_headers/crc32.o 00:02:32.498 CXX test/cpp_headers/endian.o 00:02:32.498 CXX 
test/cpp_headers/env.o 00:02:32.498 CXX test/cpp_headers/env_dpdk.o 00:02:32.498 CXX test/cpp_headers/event.o 00:02:32.498 CXX test/cpp_headers/fd_group.o 00:02:32.498 CXX test/cpp_headers/fd.o 00:02:32.498 CXX test/cpp_headers/ftl.o 00:02:32.498 CXX test/cpp_headers/file.o 00:02:32.498 CXX test/cpp_headers/gpt_spec.o 00:02:32.498 CXX test/cpp_headers/hexlify.o 00:02:32.498 CXX test/cpp_headers/histogram_data.o 00:02:32.498 CXX test/cpp_headers/init.o 00:02:32.498 CXX test/cpp_headers/idxd.o 00:02:32.498 CXX test/cpp_headers/idxd_spec.o 00:02:32.498 CXX test/cpp_headers/ioat_spec.o 00:02:32.498 CXX test/cpp_headers/ioat.o 00:02:32.498 CXX test/cpp_headers/iscsi_spec.o 00:02:32.498 CXX test/cpp_headers/keyring.o 00:02:32.498 CXX test/cpp_headers/jsonrpc.o 00:02:32.498 CXX test/cpp_headers/json.o 00:02:32.498 CXX test/cpp_headers/keyring_module.o 00:02:32.498 CXX test/cpp_headers/likely.o 00:02:32.498 CXX test/cpp_headers/lvol.o 00:02:32.498 CXX test/cpp_headers/log.o 00:02:32.498 CXX test/cpp_headers/memory.o 00:02:32.498 CXX test/cpp_headers/mmio.o 00:02:32.498 CXX test/cpp_headers/nbd.o 00:02:32.498 CXX test/cpp_headers/notify.o 00:02:32.498 CXX test/cpp_headers/nvme_intel.o 00:02:32.498 CXX test/cpp_headers/nvme_ocssd.o 00:02:32.498 CXX test/cpp_headers/nvme.o 00:02:32.498 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:32.498 CXX test/cpp_headers/nvme_zns.o 00:02:32.498 CXX test/cpp_headers/nvme_spec.o 00:02:32.498 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:32.498 CXX test/cpp_headers/nvmf_cmd.o 00:02:32.498 CXX test/cpp_headers/nvmf_spec.o 00:02:32.498 CXX test/cpp_headers/nvmf.o 00:02:32.498 CXX test/cpp_headers/nvmf_transport.o 00:02:32.498 CXX test/cpp_headers/opal_spec.o 00:02:32.498 CXX test/cpp_headers/opal.o 00:02:32.498 CXX test/cpp_headers/pci_ids.o 00:02:32.498 CXX test/cpp_headers/pipe.o 00:02:32.498 CXX test/cpp_headers/queue.o 00:02:32.498 CC examples/ioat/verify/verify.o 00:02:32.498 CXX test/cpp_headers/reduce.o 00:02:32.498 CC 
examples/ioat/perf/perf.o 00:02:32.777 CC examples/util/zipf/zipf.o 00:02:32.777 CC test/thread/poller_perf/poller_perf.o 00:02:32.777 CC test/env/pci/pci_ut.o 00:02:32.777 CC test/env/memory/memory_ut.o 00:02:32.777 CC app/fio/nvme/fio_plugin.o 00:02:32.777 CXX test/cpp_headers/rpc.o 00:02:32.777 CC test/app/jsoncat/jsoncat.o 00:02:32.777 CC test/env/vtophys/vtophys.o 00:02:32.777 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:32.777 CXX test/cpp_headers/scheduler.o 00:02:32.777 CC test/dma/test_dma/test_dma.o 00:02:32.777 CC test/app/histogram_perf/histogram_perf.o 00:02:32.777 CC test/app/stub/stub.o 00:02:32.777 CC test/app/bdev_svc/bdev_svc.o 00:02:32.777 LINK spdk_lspci 00:02:32.777 CC app/fio/bdev/fio_plugin.o 00:02:33.041 LINK rpc_client_test 00:02:33.041 LINK interrupt_tgt 00:02:33.041 LINK nvmf_tgt 00:02:33.041 LINK spdk_nvme_discover 00:02:33.041 LINK spdk_trace_record 00:02:33.041 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:33.041 CC test/env/mem_callbacks/mem_callbacks.o 00:02:33.041 LINK jsoncat 00:02:33.299 LINK vtophys 00:02:33.299 CXX test/cpp_headers/scsi.o 00:02:33.299 CXX test/cpp_headers/scsi_spec.o 00:02:33.299 CXX test/cpp_headers/sock.o 00:02:33.299 CXX test/cpp_headers/stdinc.o 00:02:33.299 CXX test/cpp_headers/string.o 00:02:33.299 CXX test/cpp_headers/thread.o 00:02:33.299 CXX test/cpp_headers/trace.o 00:02:33.299 LINK spdk_tgt 00:02:33.299 CXX test/cpp_headers/trace_parser.o 00:02:33.299 CXX test/cpp_headers/tree.o 00:02:33.299 CXX test/cpp_headers/ublk.o 00:02:33.299 LINK iscsi_tgt 00:02:33.299 LINK histogram_perf 00:02:33.299 CXX test/cpp_headers/util.o 00:02:33.299 CXX test/cpp_headers/uuid.o 00:02:33.299 CXX test/cpp_headers/version.o 00:02:33.299 LINK zipf 00:02:33.299 CXX test/cpp_headers/vfio_user_pci.o 00:02:33.299 CXX test/cpp_headers/vfio_user_spec.o 00:02:33.299 LINK poller_perf 00:02:33.299 CXX test/cpp_headers/vhost.o 00:02:33.299 CXX test/cpp_headers/vmd.o 00:02:33.299 CXX test/cpp_headers/xor.o 00:02:33.299 CXX 
test/cpp_headers/zipf.o 00:02:33.299 LINK ioat_perf 00:02:33.299 LINK env_dpdk_post_init 00:02:33.299 LINK stub 00:02:33.299 LINK bdev_svc 00:02:33.299 LINK verify 00:02:33.299 LINK spdk_dd 00:02:33.299 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:33.299 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:33.299 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:33.299 LINK spdk_trace 00:02:33.557 LINK pci_ut 00:02:33.557 LINK test_dma 00:02:33.557 LINK spdk_nvme 00:02:33.557 LINK spdk_nvme_perf 00:02:33.557 LINK spdk_bdev 00:02:33.557 CC examples/sock/hello_world/hello_sock.o 00:02:33.557 CC examples/vmd/lsvmd/lsvmd.o 00:02:33.557 LINK nvme_fuzz 00:02:33.557 CC examples/vmd/led/led.o 00:02:33.557 CC examples/idxd/perf/perf.o 00:02:33.557 CC examples/thread/thread/thread_ex.o 00:02:33.557 LINK spdk_nvme_identify 00:02:33.557 LINK vhost_fuzz 00:02:33.815 CC app/vhost/vhost.o 00:02:33.815 CC test/event/event_perf/event_perf.o 00:02:33.815 CC test/event/reactor_perf/reactor_perf.o 00:02:33.815 CC test/event/reactor/reactor.o 00:02:33.815 LINK mem_callbacks 00:02:33.815 CC test/event/app_repeat/app_repeat.o 00:02:33.815 LINK spdk_top 00:02:33.815 CC test/event/scheduler/scheduler.o 00:02:33.815 LINK lsvmd 00:02:33.815 LINK led 00:02:33.815 LINK hello_sock 00:02:33.815 LINK vhost 00:02:33.815 LINK event_perf 00:02:33.815 LINK reactor_perf 00:02:33.815 LINK reactor 00:02:33.815 LINK app_repeat 00:02:33.815 LINK thread 00:02:33.815 LINK idxd_perf 00:02:34.074 CC test/nvme/aer/aer.o 00:02:34.074 CC test/nvme/sgl/sgl.o 00:02:34.074 CC test/nvme/fused_ordering/fused_ordering.o 00:02:34.074 CC test/nvme/overhead/overhead.o 00:02:34.074 CC test/nvme/connect_stress/connect_stress.o 00:02:34.074 CC test/nvme/compliance/nvme_compliance.o 00:02:34.074 CC test/nvme/startup/startup.o 00:02:34.074 CC test/nvme/err_injection/err_injection.o 00:02:34.074 CC test/nvme/boot_partition/boot_partition.o 00:02:34.074 CC test/nvme/simple_copy/simple_copy.o 00:02:34.074 CC 
test/nvme/reserve/reserve.o 00:02:34.074 CC test/nvme/cuse/cuse.o 00:02:34.074 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:34.074 CC test/nvme/reset/reset.o 00:02:34.074 CC test/nvme/e2edp/nvme_dp.o 00:02:34.074 CC test/nvme/fdp/fdp.o 00:02:34.074 LINK memory_ut 00:02:34.074 CC test/blobfs/mkfs/mkfs.o 00:02:34.074 CC test/accel/dif/dif.o 00:02:34.074 LINK scheduler 00:02:34.074 CC test/lvol/esnap/esnap.o 00:02:34.074 LINK startup 00:02:34.074 LINK connect_stress 00:02:34.074 LINK err_injection 00:02:34.074 LINK boot_partition 00:02:34.074 LINK fused_ordering 00:02:34.074 LINK doorbell_aers 00:02:34.074 LINK reserve 00:02:34.074 LINK simple_copy 00:02:34.074 LINK mkfs 00:02:34.333 LINK reset 00:02:34.333 LINK sgl 00:02:34.333 LINK aer 00:02:34.333 LINK nvme_dp 00:02:34.333 LINK overhead 00:02:34.333 CC examples/nvme/abort/abort.o 00:02:34.333 CC examples/nvme/reconnect/reconnect.o 00:02:34.333 LINK nvme_compliance 00:02:34.333 LINK fdp 00:02:34.333 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:34.333 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:34.333 CC examples/nvme/hotplug/hotplug.o 00:02:34.333 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:34.333 CC examples/nvme/arbitration/arbitration.o 00:02:34.333 CC examples/nvme/hello_world/hello_world.o 00:02:34.333 CC examples/accel/perf/accel_perf.o 00:02:34.333 LINK dif 00:02:34.333 CC examples/blob/hello_world/hello_blob.o 00:02:34.333 CC examples/blob/cli/blobcli.o 00:02:34.333 LINK cmb_copy 00:02:34.333 LINK pmr_persistence 00:02:34.591 LINK hotplug 00:02:34.591 LINK hello_world 00:02:34.591 LINK abort 00:02:34.591 LINK arbitration 00:02:34.591 LINK reconnect 00:02:34.591 LINK hello_blob 00:02:34.591 LINK iscsi_fuzz 00:02:34.591 LINK nvme_manage 00:02:34.891 LINK accel_perf 00:02:34.891 LINK blobcli 00:02:34.891 CC test/bdev/bdevio/bdevio.o 00:02:34.891 LINK cuse 00:02:35.158 CC examples/bdev/hello_world/hello_bdev.o 00:02:35.158 LINK bdevio 00:02:35.158 CC examples/bdev/bdevperf/bdevperf.o 
00:02:35.415 LINK hello_bdev 00:02:35.673 LINK bdevperf 00:02:36.241 CC examples/nvmf/nvmf/nvmf.o 00:02:36.499 LINK nvmf 00:02:37.432 LINK esnap 00:02:38.000 00:02:38.000 real 0m43.425s 00:02:38.000 user 6m29.284s 00:02:38.000 sys 3m20.827s 00:02:38.000 23:28:26 make -- common/autotest_common.sh@1118 -- $ xtrace_disable 00:02:38.000 23:28:26 make -- common/autotest_common.sh@10 -- $ set +x 00:02:38.000 ************************************ 00:02:38.000 END TEST make 00:02:38.000 ************************************ 00:02:38.000 23:28:26 -- common/autotest_common.sh@1136 -- $ return 0 00:02:38.000 23:28:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:38.000 23:28:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:38.000 23:28:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:38.000 23:28:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.000 23:28:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:38.000 23:28:26 -- pm/common@44 -- $ pid=718489 00:02:38.000 23:28:26 -- pm/common@50 -- $ kill -TERM 718489 00:02:38.000 23:28:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.000 23:28:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:38.000 23:28:26 -- pm/common@44 -- $ pid=718491 00:02:38.000 23:28:26 -- pm/common@50 -- $ kill -TERM 718491 00:02:38.000 23:28:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.000 23:28:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:38.000 23:28:26 -- pm/common@44 -- $ pid=718493 00:02:38.000 23:28:26 -- pm/common@50 -- $ kill -TERM 718493 00:02:38.000 23:28:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.000 23:28:26 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:38.000 23:28:26 -- pm/common@44 -- $ pid=718516 00:02:38.000 23:28:26 -- pm/common@50 -- $ sudo -E kill -TERM 718516 00:02:38.000 23:28:26 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:38.000 23:28:26 -- nvmf/common.sh@7 -- # uname -s 00:02:38.000 23:28:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:38.000 23:28:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:38.000 23:28:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:38.000 23:28:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:38.000 23:28:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:38.000 23:28:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:38.000 23:28:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:38.000 23:28:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:38.000 23:28:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:38.000 23:28:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:38.000 23:28:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:38.001 23:28:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:38.001 23:28:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:38.001 23:28:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:38.001 23:28:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:38.001 23:28:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:38.001 23:28:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:38.001 23:28:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:38.001 23:28:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:38.001 23:28:26 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:38.001 23:28:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.001 23:28:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.001 23:28:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.001 23:28:26 -- paths/export.sh@5 -- # export PATH 00:02:38.001 23:28:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.001 23:28:26 -- nvmf/common.sh@47 -- # : 0 00:02:38.001 23:28:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:38.001 23:28:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:38.001 23:28:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:38.001 23:28:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:38.001 23:28:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:38.001 23:28:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:38.001 23:28:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:38.001 23:28:26 -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:02:38.001 23:28:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:38.001 23:28:26 -- spdk/autotest.sh@32 -- # uname -s 00:02:38.001 23:28:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:38.001 23:28:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:38.001 23:28:26 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:38.001 23:28:26 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:38.001 23:28:26 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:38.001 23:28:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:38.001 23:28:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:38.001 23:28:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:38.001 23:28:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:38.001 23:28:26 -- spdk/autotest.sh@48 -- # udevadm_pid=777353 00:02:38.001 23:28:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:38.001 23:28:26 -- pm/common@17 -- # local monitor 00:02:38.001 23:28:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.001 23:28:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.001 23:28:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.001 23:28:26 -- pm/common@21 -- # date +%s 00:02:38.001 23:28:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.001 23:28:26 -- pm/common@21 -- # date +%s 00:02:38.001 23:28:26 -- pm/common@25 -- # sleep 1 00:02:38.001 23:28:26 -- pm/common@21 -- # date +%s 00:02:38.001 23:28:26 -- pm/common@21 -- # date +%s 00:02:38.001 23:28:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078906 00:02:38.001 23:28:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078906 00:02:38.001 23:28:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078906 00:02:38.001 23:28:26 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078906 00:02:38.001 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078906_collect-vmstat.pm.log 00:02:38.001 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078906_collect-cpu-load.pm.log 00:02:38.001 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078906_collect-cpu-temp.pm.log 00:02:38.001 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078906_collect-bmc-pm.bmc.pm.log 00:02:38.937 23:28:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:38.937 23:28:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:38.937 23:28:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:02:38.937 23:28:27 -- common/autotest_common.sh@10 -- # set +x 00:02:38.937 23:28:27 -- spdk/autotest.sh@59 -- # create_test_list 00:02:38.937 23:28:27 -- common/autotest_common.sh@740 -- # xtrace_disable 00:02:38.937 23:28:27 -- common/autotest_common.sh@10 -- # set +x 00:02:39.195 23:28:27 -- spdk/autotest.sh@61 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:39.195 23:28:27 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.195 23:28:27 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.195 23:28:27 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:39.195 23:28:27 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:39.196 23:28:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:39.196 23:28:27 -- common/autotest_common.sh@1449 -- # uname 00:02:39.196 23:28:27 -- common/autotest_common.sh@1449 -- # '[' Linux = FreeBSD ']' 00:02:39.196 23:28:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:39.196 23:28:27 -- common/autotest_common.sh@1469 -- # uname 00:02:39.196 23:28:27 -- common/autotest_common.sh@1469 -- # [[ Linux = FreeBSD ]] 00:02:39.196 23:28:27 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:39.196 23:28:27 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:39.196 23:28:27 -- spdk/autotest.sh@72 -- # hash lcov 00:02:39.196 23:28:27 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:39.196 23:28:27 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:39.196 --rc lcov_branch_coverage=1 00:02:39.196 --rc lcov_function_coverage=1 00:02:39.196 --rc genhtml_branch_coverage=1 00:02:39.196 --rc genhtml_function_coverage=1 00:02:39.196 --rc genhtml_legend=1 00:02:39.196 --rc geninfo_all_blocks=1 00:02:39.196 ' 00:02:39.196 23:28:27 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:39.196 --rc lcov_branch_coverage=1 00:02:39.196 --rc lcov_function_coverage=1 00:02:39.196 --rc genhtml_branch_coverage=1 00:02:39.196 --rc genhtml_function_coverage=1 00:02:39.196 --rc genhtml_legend=1 00:02:39.196 --rc geninfo_all_blocks=1 00:02:39.196 ' 00:02:39.196 23:28:27 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:39.196 --rc 
lcov_branch_coverage=1 00:02:39.196 --rc lcov_function_coverage=1 00:02:39.196 --rc genhtml_branch_coverage=1 00:02:39.196 --rc genhtml_function_coverage=1 00:02:39.196 --rc genhtml_legend=1 00:02:39.196 --rc geninfo_all_blocks=1 00:02:39.196 --no-external' 00:02:39.196 23:28:27 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:39.196 --rc lcov_branch_coverage=1 00:02:39.196 --rc lcov_function_coverage=1 00:02:39.196 --rc genhtml_branch_coverage=1 00:02:39.196 --rc genhtml_function_coverage=1 00:02:39.196 --rc genhtml_legend=1 00:02:39.196 --rc geninfo_all_blocks=1 00:02:39.196 --no-external' 00:02:39.196 23:28:27 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:39.196 lcov: LCOV version 1.14 00:02:39.196 23:28:28 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions 
found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:40.571 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:40.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:40.571 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:40.572 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:40.572 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:40.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:40.572 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no 
functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:40.830 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:40.830 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:40.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:40.830 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:41.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:41.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:41.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:41.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:41.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:41.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:41.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:41.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:41.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:41.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:41.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:41.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:41.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:41.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:41.089 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:41.089 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:53.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:53.284 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:05.476 23:28:52 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:05.476 23:28:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:05.476 23:28:52 -- common/autotest_common.sh@10 -- # set +x 00:03:05.476 23:28:52 -- spdk/autotest.sh@91 -- # rm -f 00:03:05.476 23:28:52 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.411 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:06.411 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:06.411 0000:80:04.1 (8086 2021): 
Already using the ioatdma driver 00:03:06.411 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:06.670 23:28:55 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:06.670 23:28:55 -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:03:06.670 23:28:55 -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:03:06.670 23:28:55 -- common/autotest_common.sh@1664 -- # local nvme bdf 00:03:06.670 23:28:55 -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:03:06.670 23:28:55 -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:03:06.670 23:28:55 -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:03:06.670 23:28:55 -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:06.670 23:28:55 -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:03:06.670 23:28:55 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:06.670 23:28:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:06.670 23:28:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:06.670 23:28:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:06.670 23:28:55 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:06.670 23:28:55 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:06.670 No valid GPT data, bailing 00:03:06.670 23:28:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:06.670 23:28:55 -- scripts/common.sh@391 -- # pt= 00:03:06.670 23:28:55 -- scripts/common.sh@392 -- # return 1 00:03:06.670 23:28:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:06.670 1+0 records in 00:03:06.670 1+0 records out 00:03:06.670 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00195936 s, 535 MB/s 00:03:06.670 23:28:55 -- spdk/autotest.sh@118 -- # sync 00:03:06.670 23:28:55 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:06.670 23:28:55 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:06.670 23:28:55 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:11.980 23:29:00 -- spdk/autotest.sh@124 -- # uname -s 00:03:11.980 23:29:00 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:11.980 23:29:00 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:11.980 23:29:00 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:11.980 23:29:00 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:11.980 23:29:00 -- common/autotest_common.sh@10 -- # set +x 00:03:11.980 ************************************ 00:03:11.980 START TEST setup.sh 00:03:11.980 ************************************ 00:03:11.980 23:29:00 setup.sh -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:11.980 * Looking for test storage... 00:03:11.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:11.980 23:29:00 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:11.980 23:29:00 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:11.980 23:29:00 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:11.980 23:29:00 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:11.980 23:29:00 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:11.980 23:29:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.980 ************************************ 00:03:11.980 START TEST acl 00:03:11.980 ************************************ 00:03:11.980 23:29:00 setup.sh.acl -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:11.980 * Looking for test storage... 
00:03:11.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:11.980 23:29:00 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:11.980 23:29:00 setup.sh.acl -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:03:11.980 23:29:00 setup.sh.acl -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:03:11.980 23:29:00 setup.sh.acl -- common/autotest_common.sh@1664 -- # local nvme bdf 00:03:11.980 23:29:00 setup.sh.acl -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:03:11.980 23:29:00 setup.sh.acl -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:03:11.980 23:29:00 setup.sh.acl -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:03:11.980 23:29:00 setup.sh.acl -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:11.980 23:29:00 setup.sh.acl -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:03:11.980 23:29:00 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:11.980 23:29:00 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:11.980 23:29:00 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:11.980 23:29:00 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:11.980 23:29:00 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:11.980 23:29:00 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.980 23:29:00 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.265 23:29:04 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:15.265 23:29:04 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:15.265 23:29:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:15.265 23:29:04 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:15.265 23:29:04 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.265 23:29:04 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:17.796 Hugepages 00:03:17.796 node hugesize free / total 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 00:03:17.796 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:17.796 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:18.054 23:29:06 setup.sh.acl -- 
setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:80:04.5 == *:*:*.* ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:18.054 23:29:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:18.054 23:29:06 setup.sh.acl -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:18.054 23:29:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:18.054 23:29:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.054 ************************************ 00:03:18.054 START TEST denied 00:03:18.054 ************************************ 00:03:18.054 23:29:06 setup.sh.acl.denied -- common/autotest_common.sh@1117 -- # denied 00:03:18.054 23:29:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:18.054 23:29:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:18.054 23:29:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.054 23:29:06 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:18.054 23:29:06 
setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:20.577 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:20.577 23:29:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:20.577 23:29:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:20.577 23:29:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:20.577 23:29:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:20.577 23:29:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:20.577 23:29:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:20.577 23:29:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:20.577 23:29:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:20.577 23:29:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:20.577 23:29:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:24.756 00:03:24.756 real 0m6.231s 00:03:24.756 user 0m1.919s 00:03:24.756 sys 0m3.493s 00:03:24.756 23:29:13 setup.sh.acl.denied -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:24.756 23:29:13 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:24.756 ************************************ 00:03:24.756 END TEST denied 00:03:24.756 ************************************ 00:03:24.756 23:29:13 setup.sh.acl -- common/autotest_common.sh@1136 -- # return 0 00:03:24.756 23:29:13 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:24.756 23:29:13 setup.sh.acl -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:24.756 23:29:13 setup.sh.acl -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:24.756 23:29:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:24.756 
************************************ 00:03:24.756 START TEST allowed 00:03:24.756 ************************************ 00:03:24.756 23:29:13 setup.sh.acl.allowed -- common/autotest_common.sh@1117 -- # allowed 00:03:24.756 23:29:13 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:24.756 23:29:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:24.756 23:29:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:24.756 23:29:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.756 23:29:13 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:28.040 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:28.040 23:29:16 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:28.040 23:29:16 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:28.040 23:29:16 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:28.040 23:29:16 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:28.040 23:29:16 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.324 00:03:31.324 real 0m6.517s 00:03:31.324 user 0m1.929s 00:03:31.324 sys 0m3.677s 00:03:31.324 23:29:19 setup.sh.acl.allowed -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:31.324 23:29:19 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:31.324 ************************************ 00:03:31.324 END TEST allowed 00:03:31.324 ************************************ 00:03:31.324 23:29:19 setup.sh.acl -- common/autotest_common.sh@1136 -- # return 0 00:03:31.324 00:03:31.324 real 0m18.928s 00:03:31.324 user 0m6.200s 00:03:31.324 sys 0m11.193s 00:03:31.324 23:29:19 setup.sh.acl -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:31.324 23:29:19 setup.sh.acl -- common/autotest_common.sh@10 -- 
# set +x 00:03:31.324 ************************************ 00:03:31.324 END TEST acl 00:03:31.324 ************************************ 00:03:31.324 23:29:19 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:03:31.324 23:29:19 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.324 23:29:19 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:31.324 23:29:19 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:31.325 23:29:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:31.325 ************************************ 00:03:31.325 START TEST hugepages 00:03:31.325 ************************************ 00:03:31.325 23:29:19 setup.sh.hugepages -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:31.325 * Looking for test storage... 00:03:31.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:31.325 23:29:19 setup.sh.hugepages -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 173369548 kB' 'MemAvailable: 176242364 kB' 'Buffers: 3896 kB' 'Cached: 10146628 kB' 'SwapCached: 0 kB' 'Active: 7162848 kB' 'Inactive: 3507524 kB' 'Active(anon): 6770840 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523096 kB' 'Mapped: 190040 kB' 'Shmem: 6250992 kB' 'KReclaimable: 235472 kB' 'Slab: 824968 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 589496 kB' 'KernelStack: 20480 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982028 kB' 'Committed_AS: 8305780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315436 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:31.325 23:29:19 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.325 23:29:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... identical xtrace elided for the remaining /proc/meminfo keys (MemFree through HugePages_Rsvd): each key is compared against Hugepagesize and skipped with continue ...]
00:03:31.326 23:29:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.326 23:29:19 setup.sh.hugepages
-- setup/common.sh@31 -- # read -r var val _ 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:03:31.326 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:31.327 23:29:19 
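The long xtrace run above is setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key until it reaches Hugepagesize, then echoing 2048 and returning. A minimal sketch of that loop follows; the optional file parameter is an illustrative addition for testing (the real helper reads /proc/meminfo, or a per-node meminfo file when a node is given):

```shell
# Minimal sketch of the meminfo scan traced above (not the exact SPDK
# helper; the file argument is an assumption added for illustration).
get_meminfo() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Each non-matching key is skipped, which is what produces the
        # repeated "[[ <key> == Hugepagesize ]] / continue" trace lines.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}
```

On this node the lookup returns 2048 (kB), which hugepages.sh stores as default_hugepages before computing test page counts.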
setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:31.327 23:29:19 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:03:31.327 23:29:19 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:31.327 23:29:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:31.327 23:29:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:31.327 ************************************ 00:03:31.327 START TEST single_node_setup 00:03:31.327 
************************************ 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1117 -- # single_node_setup 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:31.327 23:29:19 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:31.327 23:29:20 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in 
"${user_nodes[@]}" 00:03:31.327 23:29:20 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:31.327 23:29:20 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:03:31.327 23:29:20 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:03:31.327 23:29:20 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:03:31.327 23:29:20 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:03:31.327 23:29:20 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.327 23:29:20 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:33.894 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:33.894 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:34.178 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:34.178 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:34.178 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:34.746 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # 
local node 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175506028 kB' 'MemAvailable: 178378844 kB' 'Buffers: 3896 kB' 'Cached: 10146724 kB' 'SwapCached: 0 kB' 'Active: 7184628 kB' 'Inactive: 3507524 kB' 'Active(anon): 6792620 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544580 kB' 'Mapped: 190848 kB' 'Shmem: 6251088 kB' 'KReclaimable: 235472 kB' 'Slab: 824052 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 588580 kB' 'KernelStack: 20832 kB' 'PageTables: 9832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8325500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315536 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.009 23:29:23 
setup.sh.hugepages.single_node_setup
[... identical xtrace elided for the remaining /proc/meminfo keys: each key is compared against AnonHugePages and skipped with continue until the AnonHugePages entry itself is reached ...]
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.009 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.010 
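The xtrace above is the per-key loop of `get_meminfo`: it reads `/proc/meminfo` as `Key: value unit` lines with `IFS=': '`, `continue`s past every key that does not match the requested one (which is why each key produces a `[[ ... ]]` / `continue` pair in the log), and echoes the matching value. A minimal sketch of that pattern, under the assumption that a standalone helper is acceptable (`parse_meminfo` is a hypothetical name; the real helper is `get_meminfo` in `setup/common.sh`, which also handles per-node `meminfo` files):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the meminfo-parsing loop traced above.
parse_meminfo() {
    # $1 = key to look up, $2 = meminfo-style file (defaults to /proc/meminfo)
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Under xtrace, this skip shows up as one [[ ... ]] / continue pair per key.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# Demo against a small sample instead of the live /proc/meminfo:
sample=$(mktemp)
printf '%s\n' 'MemTotal: 191381152 kB' 'HugePages_Total: 1024' 'HugePages_Surp: 0' > "$sample"
parse_meminfo HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

Note that the quoted `"$get"` forces a literal (non-glob) comparison; bash's xtrace renders that comparison with every character backslash-escaped, which is where the `\H\u\g\e\P\a\g\e\s\_\S\u\r\p` spelling in this log comes from.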
23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175504864 kB' 'MemAvailable: 178377680 kB' 'Buffers: 3896 kB' 'Cached: 10146724 kB' 'SwapCached: 0 kB' 'Active: 7179812 kB' 'Inactive: 3507524 kB' 'Active(anon): 6787804 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539676 kB' 'Mapped: 190336 kB' 'Shmem: 6251088 kB' 'KReclaimable: 235472 kB' 'Slab: 823816 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 588344 kB' 'KernelStack: 20736 kB' 'PageTables: 9760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8320896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315548 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 
23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.010 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.011 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31-32 trace lines trimmed: each remaining /proc/meminfo key from NFS_Unstable through HugePages_Rsvd fails the HugePages_Surp match and hits continue ...] 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0
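The trace above walks a per-key scan of /proc/meminfo until the requested key (here HugePages_Surp) matches, then echoes its value. A minimal sketch of that loop, reconstructed from the log's setup/common.sh@31-33 trace rather than from the actual setup/common.sh source (the function shape and the file-handling argument are assumptions; only the `IFS=': ' read -r var val _` / match / `continue` idiom is taken from the trace):

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the get_meminfo loop seen in the trace.
# Splits each meminfo line on ': ', skips non-matching keys (common.sh@32
# in the log), and echoes the value of the first matching key (common.sh@33).
get_meminfo() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching key: keep scanning
        echo "$val"                        # value only; trailing "kB" lands in $_
        return 0
    done < "$file"
    return 1
}

# Sample meminfo excerpt using the hugepage values printed in this log.
meminfo=$(mktemp)
printf '%s\n' 'MemTotal: 191381152 kB' 'HugePages_Total: 1024' \
    'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' > "$meminfo"

surp=$(get_meminfo HugePages_Surp "$meminfo")    # -> 0, as in the trace
total=$(get_meminfo HugePages_Total "$meminfo")  # -> 1024
rm -f "$meminfo"
echo "surp=$surp total=$total"
```

With `IFS=': '`, `read` treats both the colon and spaces as field separators, so `HugePages_Total:     1024` splits cleanly into the key and the bare value without further trimming.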
00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.012 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.013 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175504472 kB' 'MemAvailable: 178377288 kB' 'Buffers: 3896 kB' 'Cached: 10146724 kB' 'SwapCached: 0 kB' 'Active: 7180072 kB' 'Inactive: 3507524 kB' 'Active(anon): 6788064 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539840 kB' 'Mapped: 190060 kB' 'Shmem: 6251088 kB' 'KReclaimable: 235472 kB' 'Slab: 823816 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 588344 kB' 'KernelStack: 20848 kB' 
'PageTables: 9760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8320672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315612 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:35.013 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.013 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue [... identical setup/common.sh@31-32 trace lines trimmed: each /proc/meminfo key from MemFree through HugePages_Free fails the HugePages_Rsvd match and hits continue ...] 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:35.015 nr_hugepages=1024 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:35.015 resv_hugepages=0 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:35.015 surplus_hugepages=0 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:35.015 anon_hugepages=0 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.015 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local
mem_f mem 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175504084 kB' 'MemAvailable: 178376900 kB' 'Buffers: 3896 kB' 'Cached: 10146768 kB' 'SwapCached: 0 kB' 'Active: 7178532 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786524 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538644 kB' 'Mapped: 189920 kB' 'Shmem: 6251132 kB' 'KReclaimable: 235472 kB' 'Slab: 823976 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 588504 kB' 'KernelStack: 20544 kB' 'PageTables: 9032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8320576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 
23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.016 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.017 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- 
setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.018 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85092924 kB' 'MemUsed: 12569760 kB' 
'SwapCached: 0 kB' 'Active: 5563948 kB' 'Inactive: 3335416 kB' 'Active(anon): 5406408 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8686528 kB' 'Mapped: 75304 kB' 'AnonPages: 216472 kB' 'Shmem: 5193572 kB' 'KernelStack: 12424 kB' 'PageTables: 5616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132312 kB' 'Slab: 423592 kB' 'SReclaimable: 132312 kB' 'SUnreclaim: 291280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.019 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.020 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.021 23:29:23 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:35.021 node0=1024 expecting 1024 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:35.021 00:03:35.021 real 0m3.945s 00:03:35.021 user 0m1.284s 00:03:35.021 sys 0m1.917s 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:35.021 23:29:23 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:03:35.021 ************************************ 00:03:35.021 END TEST single_node_setup 00:03:35.021 ************************************ 00:03:35.021 23:29:23 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:03:35.021 23:29:23 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 
00:03:35.021 23:29:23 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:35.280 23:29:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:35.280 23:29:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.280 ************************************ 00:03:35.280 START TEST even_2G_alloc 00:03:35.280 ************************************ 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1117 -- # even_2G_alloc 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:35.280 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:35.281 
23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.281 23:29:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:37.820 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:37.820 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 
00:03:37.820 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:37.820 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175509992 kB' 'MemAvailable: 178382808 kB' 'Buffers: 3896 kB' 'Cached: 10146876 kB' 'SwapCached: 0 kB' 'Active: 7180408 kB' 'Inactive: 3507524 kB' 'Active(anon): 6788400 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539908 kB' 'Mapped: 190076 kB' 'Shmem: 6251240 kB' 'KReclaimable: 235472 kB' 'Slab: 823320 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587848 kB' 'KernelStack: 20576 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8318924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315628 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 
'DirectMap1G: 182452224 kB' 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:37.820 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.083 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.084 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175512700 kB' 'MemAvailable: 178385516 kB' 'Buffers: 3896 kB' 'Cached: 10146880 kB' 'SwapCached: 0 kB' 'Active: 7179252 kB' 'Inactive: 3507524 kB' 'Active(anon): 6787244 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538700 kB' 'Mapped: 190060 kB' 'Shmem: 6251244 kB' 'KReclaimable: 235472 kB' 'Slab: 823760 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 588288 kB' 'KernelStack: 20480 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8318944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 
23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.084 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.084 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175513356 kB' 'MemAvailable: 178386172 kB' 'Buffers: 3896 kB' 'Cached: 10146880 kB' 'SwapCached: 0 kB' 'Active: 7178472 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786464 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538380 kB' 'Mapped: 189984 kB' 'Shmem: 6251244 kB' 'KReclaimable: 235472 kB' 'Slab: 823728 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 588256 kB' 'KernelStack: 20480 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8318964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 
23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.085 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 
23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:38.086 nr_hugepages=1024 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:38.086 resv_hugepages=0 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:38.086 surplus_hugepages=0 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:38.086 anon_hugepages=0 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@18 -- # local node= 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175515036 kB' 'MemAvailable: 178387852 kB' 'Buffers: 3896 kB' 'Cached: 10146936 kB' 'SwapCached: 0 kB' 'Active: 7178476 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786468 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538312 kB' 'Mapped: 189984 kB' 'Shmem: 6251300 kB' 'KReclaimable: 235472 kB' 'Slab: 823728 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 588256 kB' 'KernelStack: 20464 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8318988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 
23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.086 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.086 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.087 23:29:26 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same IFS / read / compare / continue cycle repeats for Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted ...]
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86154296 kB' 'MemUsed: 11508388 kB' 'SwapCached: 0 kB' 'Active: 5564572 kB' 'Inactive: 3335416 kB' 'Active(anon): 5407032 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8686572 kB' 'Mapped: 75296 kB' 'AnonPages: 216524 kB' 'Shmem: 5193616 kB' 'KernelStack: 12152 kB' 'PageTables: 4848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132312 kB' 'Slab: 423388 kB' 'SReclaimable: 132312 kB' 'SUnreclaim: 291076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.087 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the same IFS / read / compare / continue cycle repeats for every following node0 field through HugePages_Free ...]
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 89362168 kB' 'MemUsed: 4356300 kB' 'SwapCached: 0 kB' 'Active: 1614532 kB' 'Inactive: 172108 kB' 'Active(anon): 1380064 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1464280 kB' 'Mapped: 114688 kB' 'AnonPages: 322464 kB' 'Shmem: 1057704 kB' 'KernelStack: 8328 kB' 'PageTables: 3972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103160 kB' 'Slab: 400340 kB' 'SReclaimable: 103160 kB' 'SUnreclaim: 297180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.088 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... the same IFS / read / compare / continue cycle repeats for every following node1 field through FilePmdMapped ...]
00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31
-- # read -r var val _ 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:38.089 node0=512 expecting 512 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # 
sorted_t[nodes_test[node]]=1 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:38.089 node1=512 expecting 512 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:03:38.089 00:03:38.089 real 0m2.960s 00:03:38.089 user 0m1.220s 00:03:38.089 sys 0m1.783s 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:38.089 23:29:26 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:38.089 ************************************ 00:03:38.089 END TEST even_2G_alloc 00:03:38.089 ************************************ 00:03:38.089 23:29:26 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:03:38.089 23:29:26 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:03:38.089 23:29:26 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:38.089 23:29:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:38.089 23:29:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:38.089 ************************************ 00:03:38.089 START TEST odd_alloc 00:03:38.089 ************************************ 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1117 -- # odd_alloc 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:38.089 23:29:27 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@150 -- # setup output 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.089 23:29:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:40.621 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:40.621 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.621 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local 
surp 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175534728 kB' 'MemAvailable: 178407544 kB' 'Buffers: 3896 kB' 'Cached: 10147024 kB' 'SwapCached: 0 kB' 'Active: 7178768 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786760 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538240 kB' 'Mapped: 189020 kB' 'Shmem: 6251388 kB' 'KReclaimable: 235472 kB' 'Slab: 823424 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587952 kB' 'KernelStack: 20640 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8311508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315468 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.621 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.622 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.622 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175534832 kB' 'MemAvailable: 178407648 kB' 'Buffers: 3896 kB' 'Cached: 10147028 kB' 'SwapCached: 0 kB' 'Active: 7177808 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785800 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537716 kB' 'Mapped: 188920 kB' 'Shmem: 6251392 kB' 'KReclaimable: 235472 kB' 'Slab: 823456 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587984 kB' 'KernelStack: 20576 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8311656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315420 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:40.623 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.623 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.623 23:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.623 23:29:29 [... xtrace condensed: the HugePages_Surp lookup walks every /proc/meminfo key in order (MemFree, MemAvailable, Buffers, ... HugePages_Total, HugePages_Free, HugePages_Rsvd), repeating the same @31 IFS/read and @32 test/continue pair for each non-matching key ...] 00:03:40.624 23:29:29
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175535224 kB' 'MemAvailable: 178408040 kB' 'Buffers: 3896 kB' 'Cached: 10147044 kB' 'SwapCached: 0 kB' 'Active: 7177820 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785812 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 
kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537724 kB' 'Mapped: 188920 kB' 'Shmem: 6251408 kB' 'KReclaimable: 235472 kB' 'Slab: 823456 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587984 kB' 'KernelStack: 20576 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8311684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315420 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.624 23:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:40.624 23:29:29 [... xtrace condensed: the HugePages_Rsvd lookup scans the /proc/meminfo keys the same way (Buffers, Cached, ... Vmalloc*), repeating the @31 IFS/read and @32 test/continue pair for each non-match; the captured trace is cut off mid-scan after the VmallocChunk check ...] 00:03:40.626 23:29:29
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025 00:03:40.626 nr_hugepages=1025 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:40.626 resv_hugepages=0 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:40.626 surplus_hugepages=0 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:40.626 anon_hugepages=0 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages )) 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175535224 kB' 'MemAvailable: 178408040 kB' 'Buffers: 3896 kB' 'Cached: 10147044 kB' 'SwapCached: 0 kB' 'Active: 7177820 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785812 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537724 kB' 'Mapped: 188920 kB' 'Shmem: 6251408 kB' 'KReclaimable: 235472 kB' 'Slab: 823456 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587984 kB' 'KernelStack: 20576 kB' 'PageTables: 9048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029580 kB' 'Committed_AS: 8311708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315420 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:40.626 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [trace condensed: each non-matching meminfo key from MemTotal through Unaccepted skipped via IFS=': '; read -r var val _; continue] 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc --
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.890 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86178208 kB' 'MemUsed: 11484476 kB' 'SwapCached: 0 kB' 'Active: 5562192 kB' 'Inactive: 3335416 kB' 'Active(anon): 5404652 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8686584 kB' 'Mapped: 74980 kB' 'AnonPages: 214196 kB' 'Shmem: 5193628 kB' 'KernelStack: 12200 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132312 kB' 'Slab: 423432 kB' 'SReclaimable: 132312 kB' 'SUnreclaim: 291120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.891 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 89357636 kB' 'MemUsed: 4360832 kB' 'SwapCached: 0 kB' 'Active: 1615916 kB' 
'Inactive: 172108 kB' 'Active(anon): 1381448 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1464408 kB' 'Mapped: 113940 kB' 'AnonPages: 323800 kB' 'Shmem: 1057832 kB' 'KernelStack: 8408 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103160 kB' 'Slab: 400024 kB' 'SReclaimable: 103160 kB' 'SUnreclaim: 296864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.892 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.893 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:03:40.894 node0=513 expecting 
513 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:03:40.894 node1=512 expecting 512 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:40.894 00:03:40.894 real 0m2.633s 00:03:40.894 user 0m1.009s 00:03:40.894 sys 0m1.640s 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:40.894 23:29:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:40.894 ************************************ 00:03:40.894 END TEST odd_alloc 00:03:40.894 ************************************ 00:03:40.894 23:29:29 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:03:40.894 23:29:29 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:03:40.894 23:29:29 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:40.894 23:29:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:40.894 23:29:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.894 ************************************ 00:03:40.894 START TEST custom_alloc 00:03:40.894 ************************************ 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1117 -- # custom_alloc 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@160 -- # nodes_hp=() 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@82 -- # : 256 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:40.894 23:29:29 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:40.894 23:29:29 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.894 23:29:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.433 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:43.433 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:43.433 
0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:43.433 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:43.697 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:43.697 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:43.697 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:43.697 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.697 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174479932 kB' 'MemAvailable: 177352748 kB' 'Buffers: 3896 kB' 'Cached: 10147180 kB' 'SwapCached: 0 kB' 'Active: 7178560 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786552 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537840 kB' 'Mapped: 189016 kB' 'Shmem: 6251544 kB' 'KReclaimable: 235472 kB' 'Slab: 823076 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587604 kB' 'KernelStack: 20464 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8312708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.698 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31-32 -- # IFS=': '; read -r var val _ (scan repeated for each remaining key; HardwareCorrupted != AnonHugePages -> continue) 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Surp; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }") 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174481444 kB' 'MemAvailable: 177354260 kB' 'Buffers: 3896 kB' 'Cached: 10147184 kB' 'SwapCached: 0 kB' 'Active: 7178280 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786272 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537580 kB' 'Mapped: 189016 kB' 'Shmem: 6251548 kB' 'KReclaimable: 235472 kB' 'Slab: 823076 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587604 kB' 'KernelStack: 20464 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8312728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:43.699 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _ (scan repeated for every key MemTotal ... HugePages_Rsvd; none match HugePages_Surp -> continue) 00:03:43.702 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.702 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.702 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:43.702 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:43.702 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:43.702 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # local get=HugePages_Rsvd; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }") 00:03:43.702 23:29:32
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.702 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.702 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174481868 kB' 'MemAvailable: 177354684 kB' 'Buffers: 3896 kB' 'Cached: 10147184 kB' 'SwapCached: 0 kB' 'Active: 7177784 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785776 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537556 kB' 'Mapped: 188940 kB' 'Shmem: 6251548 kB' 'KReclaimable: 235472 kB' 'Slab: 823048 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587576 kB' 'KernelStack: 20464 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8312748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:43.702 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _ (scan repeated for every key MemTotal ... Committed_AS; none match HugePages_Rsvd -> continue) 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 
23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:03:43.704 nr_hugepages=1536 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:43.704 resv_hugepages=0 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:43.704 surplus_hugepages=0 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:43.704 anon_hugepages=0 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.704 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.705 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.705 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.705 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.705 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.705 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.705 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.705 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 174482652 kB' 'MemAvailable: 177355468 kB' 'Buffers: 3896 kB' 'Cached: 10147240 kB' 'SwapCached: 0 kB' 'Active: 7177492 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785484 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537176 kB' 'Mapped: 188940 kB' 'Shmem: 6251604 kB' 'KReclaimable: 235472 kB' 'Slab: 823048 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587576 kB' 'KernelStack: 20448 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506316 kB' 'Committed_AS: 8312768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315516 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:43.705 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.705 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.705 [... get_meminfo HugePages_Total: setup/common.sh@32 compared every other /proc/meminfo key (MemFree through Unaccepted) against HugePages_Total; each non-match repeated the identical "setup/common.sh@32 -- # continue / setup/common.sh@31 -- # IFS=': ' / setup/common.sh@31 -- # read -r var val _" trace, 00:03:43.705-00:03:43.969 ...] 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:43.969 23:29:32
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.969 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86176908 kB' 'MemUsed: 11485776 kB' 'SwapCached: 0 kB' 'Active: 5562128 kB' 'Inactive: 3335416 kB' 'Active(anon): 5404588 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8686716 kB' 'Mapped: 74988 kB' 'AnonPages: 214096 kB' 'Shmem: 5193760 kB' 'KernelStack: 12152 kB' 'PageTables: 4756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132312 kB' 'Slab: 422952 kB' 'SReclaimable: 132312 kB' 'SUnreclaim: 290640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 
23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.969 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718468 kB' 'MemFree: 88306112 kB' 'MemUsed: 5412356 kB' 'SwapCached: 0 kB' 'Active: 1615736 kB' 'Inactive: 172108 kB' 'Active(anon): 1381268 kB' 'Inactive(anon): 0 kB' 'Active(file): 234468 kB' 'Inactive(file): 172108 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1464424 kB' 'Mapped: 113952 kB' 'AnonPages: 323472 kB' 'Shmem: 1057848 kB' 'KernelStack: 8312 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 103160 kB' 'Slab: 400096 kB' 'SReclaimable: 103160 kB' 'SUnreclaim: 296936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.970 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.971 
23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:03:43.971 node0=512 expecting 512 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:03:43.971 node1=1024 expecting 1024 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:43.971 00:03:43.971 real 0m2.992s 00:03:43.971 user 0m1.199s 00:03:43.971 sys 0m1.830s 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:43.971 23:29:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:43.971 ************************************ 00:03:43.971 END TEST custom_alloc 00:03:43.971 ************************************ 00:03:43.971 23:29:32 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:03:43.971 23:29:32 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test 
no_shrink_alloc no_shrink_alloc 00:03:43.971 23:29:32 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:43.971 23:29:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:43.971 23:29:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:43.971 ************************************ 00:03:43.971 START TEST no_shrink_alloc 00:03:43.971 ************************************ 00:03:43.971 23:29:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1117 -- # no_shrink_alloc 00:03:43.971 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:03:43.971 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:03:43.971 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:03:43.971 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:03:43.971 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:03:43.971 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:03:43.971 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:03:43.972 23:29:32 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.972 23:29:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.506 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:46.506 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:46.506 
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:46.506 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175512568 kB' 'MemAvailable: 178385384 kB' 'Buffers: 3896 kB' 'Cached: 10147328 kB' 'SwapCached: 0 kB' 'Active: 7179164 kB' 'Inactive: 3507524 kB' 'Active(anon): 6787156 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538192 kB' 'Mapped: 189048 kB' 'Shmem: 6251692 kB' 'KReclaimable: 235472 kB' 'Slab: 822984 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587512 kB' 'KernelStack: 20432 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8313252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315452 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.506 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.507 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 
23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.508 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.771 23:29:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175514564 kB' 'MemAvailable: 178387380 kB' 'Buffers: 3896 kB' 'Cached: 10147332 kB' 'SwapCached: 0 kB' 'Active: 7178876 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786868 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537936 kB' 'Mapped: 189032 kB' 'Shmem: 6251696 kB' 'KReclaimable: 235472 kB' 'Slab: 822976 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587504 kB' 'KernelStack: 20432 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8313268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315404 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.771 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.771 
23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@18 -- # local node= 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.773 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175515232 kB' 'MemAvailable: 178388048 kB' 'Buffers: 3896 kB' 'Cached: 10147352 kB' 'SwapCached: 0 kB' 'Active: 7178396 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786388 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537908 kB' 'Mapped: 188956 kB' 'Shmem: 6251716 kB' 'KReclaimable: 235472 kB' 'Slab: 822964 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587492 kB' 'KernelStack: 20432 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8313292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315404 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:46.773
var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:46.774 23:29:35 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:46.774 nr_hugepages=1024 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:46.774 resv_hugepages=0 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:46.774 surplus_hugepages=0 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:46.774 anon_hugepages=0 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:46.774 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175515232 kB' 'MemAvailable: 178388048 kB' 'Buffers: 3896 kB' 'Cached: 10147372 kB' 'SwapCached: 0 kB' 'Active: 7177976 kB' 'Inactive: 3507524 kB' 'Active(anon): 6785968 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537456 kB' 'Mapped: 188956 kB' 'Shmem: 6251736 kB' 'KReclaimable: 235472 kB' 'Slab: 822964 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587492 kB' 'KernelStack: 20416 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8313312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315404 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.775 23:29:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.775 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:46.776 
23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:46.776 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85121652 kB' 'MemUsed: 12541032 kB' 'SwapCached: 0 kB' 'Active: 5562320 kB' 'Inactive: 3335416 kB' 'Active(anon): 5404780 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8686836 kB' 'Mapped: 74988 kB' 'AnonPages: 214072 kB' 'Shmem: 5193880 kB' 'KernelStack: 12136 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132312 kB' 'Slab: 422968 kB' 'SReclaimable: 132312 kB' 'SUnreclaim: 290656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.777 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:03:46.778 node0=1024 expecting 1024 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:03:46.778 23:29:35 
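The trace above repeats one pattern thousands of times: `setup/common.sh` reads a meminfo-style file line by line with `IFS=': ' read -r var val _`, skips every key that does not match the requested field, and echoes the value when it does. A minimal standalone sketch of that technique (the function name and sample file here are illustrative, not SPDK's exact helper; the real `get_meminfo` also strips the `Node <n> ` prefix from per-node `/sys/devices/system/node/node*/meminfo` files):

```shell
#!/usr/bin/env bash
# Hedged sketch of the get_meminfo pattern visible in the trace:
# split each "Key:   value [unit]" line on colon/space, return the
# value for the requested key.
get_meminfo_field() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        # var holds the key, val the first value token, _ the rest (e.g. "kB")
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1
}

# Demonstration against a small meminfo-style sample file:
sample=$(mktemp)
printf '%s\n' 'MemTotal:       97662684 kB' \
              'HugePages_Total:    1024' \
              'HugePages_Surp:        0' > "$sample"
get_meminfo_field HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

Under `set -x` every iteration of that `read`/`continue` loop is echoed, which is why a single `get_meminfo HugePages_Surp 0` call expands into the long run of `[[ Key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` trace lines seen here.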
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.778 23:29:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.340 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:49.340 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.340 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.341 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.341 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.341 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.341 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:49.341 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175534424 kB' 'MemAvailable: 178407240 kB' 'Buffers: 3896 kB' 'Cached: 10147460 kB' 'SwapCached: 0 kB' 'Active: 7179096 kB' 'Inactive: 3507524 kB' 'Active(anon): 6787088 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538420 kB' 'Mapped: 188972 kB' 'Shmem: 6251824 kB' 'KReclaimable: 235472 kB' 'Slab: 823164 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587692 kB' 'KernelStack: 20432 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8313968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.341 
23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.341 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.608 23:29:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175533516 kB' 'MemAvailable: 178406332 kB' 'Buffers: 3896 kB' 'Cached: 10147464 kB' 'SwapCached: 0 kB' 'Active: 7178952 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786944 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538280 kB' 'Mapped: 189016 kB' 'Shmem: 6251828 kB' 'KReclaimable: 235472 kB' 'Slab: 823232 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587760 kB' 'KernelStack: 20448 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8313984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.609 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 
23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 
23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.610 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.611 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175533516 kB' 'MemAvailable: 178406332 kB' 'Buffers: 3896 kB' 'Cached: 10147484 kB' 'SwapCached: 0 kB' 'Active: 7178984 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786976 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538288 kB' 'Mapped: 189016 kB' 'Shmem: 6251848 kB' 'KReclaimable: 235472 kB' 'Slab: 823232 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587760 kB' 'KernelStack: 20448 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8314008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315564 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 
23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 
23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.611 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.612 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.613 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:03:49.613 nr_hugepages=1024 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:03:49.613 resv_hugepages=0 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:03:49.613 surplus_hugepages=0 00:03:49.613 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:03:49.613 anon_hugepages=0 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381152 kB' 'MemFree: 175533268 kB' 'MemAvailable: 178406084 kB' 'Buffers: 3896 kB' 'Cached: 10147524 kB' 'SwapCached: 0 kB' 'Active: 7178660 kB' 'Inactive: 3507524 kB' 'Active(anon): 6786652 kB' 'Inactive(anon): 0 kB' 'Active(file): 392008 kB' 'Inactive(file): 
3507524 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537912 kB' 'Mapped: 189016 kB' 'Shmem: 6251888 kB' 'KReclaimable: 235472 kB' 'Slab: 823232 kB' 'SReclaimable: 235472 kB' 'SUnreclaim: 587760 kB' 'KernelStack: 20432 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030604 kB' 'Committed_AS: 8314028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315580 kB' 'VmallocChunk: 0 kB' 'Percpu: 79104 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3056596 kB' 'DirectMap2M: 16545792 kB' 'DirectMap1G: 182452224 kB' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.613 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.613 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.613 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.614 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [identical scan records condensed: Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted — each compared against HugePages_Total, no match, continue]
00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@26 -- # local node 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.615 23:29:38 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 85141644 kB' 'MemUsed: 12521040 kB' 'SwapCached: 0 kB' 'Active: 5562396 kB' 'Inactive: 3335416 kB' 'Active(anon): 5404856 kB' 'Inactive(anon): 0 kB' 'Active(file): 157540 kB' 'Inactive(file): 3335416 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8686944 kB' 'Mapped: 75032 kB' 'AnonPages: 213980 kB' 'Shmem: 5193988 kB' 'KernelStack: 12136 kB' 'PageTables: 4668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 132312 kB' 'Slab: 423232 kB' 'SReclaimable: 132312 kB' 'SUnreclaim: 290920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:49.615 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [identical scan records condensed: MemTotal through HugePages_Free (36 fields) — each compared against HugePages_Surp, no match, continue]
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:03:49.616
00:03:49.616 real 0m5.667s
00:03:49.616 user 0m2.276s
00:03:49.616 sys 0m3.489s
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable
00:03:49.616 23:29:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:49.616 ************************************
00:03:49.616 END TEST no_shrink_alloc
00:03:49.616 ************************************
00:03:49.616 23:29:38 setup.sh.hugepages --
common/autotest_common.sh@1136 -- # return 0 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:03:49.616 23:29:38 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:03:49.616 00:03:49.616 real 0m18.668s 00:03:49.616 user 0m7.200s 00:03:49.616 sys 0m10.953s 00:03:49.616 23:29:38 setup.sh.hugepages -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:49.616 23:29:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:49.616 ************************************ 00:03:49.616 END TEST hugepages 00:03:49.616 ************************************ 00:03:49.616 23:29:38 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:03:49.616 23:29:38 setup.sh -- 
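The no_shrink_alloc trace above is many iterations of one setup/common.sh idiom: read /proc/meminfo one line at a time with `IFS=': '`, `continue` past every key that is not the requested one (here `HugePages_Surp`), and echo the value once the key matches. A minimal standalone sketch of that idiom, with `get_meminfo_field` as an illustrative name rather than the script's actual helper, fed a fabricated meminfo snippet instead of the real `/proc/meminfo`:

```shell
#!/usr/bin/env bash
# Sketch of the /proc/meminfo parsing idiom driving the trace above:
# split each "Key:   value" line on ': ', skip keys that don't match
# (the repeated "continue" entries in the log), echo the matching value.
get_meminfo_field() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] || continue   # non-matching key: next line
        echo "$val"
        return 0
    done
    return 1
}

# Fabricated snippet standing in for /proc/meminfo on the test node.
meminfo='HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Surp:        0'

get_meminfo_field HugePages_Surp <<<"$meminfo"   # -> 0, as echoed in the trace
```

The `read -r var val _` pattern deliberately discards everything after the second field (the `kB` unit on sized entries), which is why the trace never has to strip units.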
setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:49.616 23:29:38 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:49.616 23:29:38 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:49.616 23:29:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:49.616 ************************************ 00:03:49.616 START TEST driver 00:03:49.616 ************************************ 00:03:49.617 23:29:38 setup.sh.driver -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:49.876 * Looking for test storage... 00:03:49.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:49.876 23:29:38 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:49.876 23:29:38 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.876 23:29:38 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.109 23:29:42 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:54.109 23:29:42 setup.sh.driver -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:54.109 23:29:42 setup.sh.driver -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:54.109 23:29:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:54.109 ************************************ 00:03:54.109 START TEST guess_driver 00:03:54.109 ************************************ 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1117 -- # guess_driver 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:54.109 
23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:54.109 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:54.109 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:54.109 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:54.109 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:54.109 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:54.109 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:54.109 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # 
return 0 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:54.109 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:54.110 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:54.110 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:54.110 Looking for driver=vfio-pci 00:03:54.110 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.110 23:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:54.110 23:29:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.110 23:29:42 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 
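The guess_driver trace resolves to vfio-pci because the node exposes 174 IOMMU groups and `modprobe --show-depends vfio_pci` returns a chain of real `.ko` modules. A condensed sketch of that decision, taking the group count and the modprobe output as parameters so it stays runnable off the test node (`pick_driver` here is illustrative shorthand for setup/driver.sh's flow, not its exact function):

```shell
#!/usr/bin/env bash
# Condensed sketch of the driver-guess decision traced above:
# vfio-pci wins when IOMMU groups exist AND the vfio_pci dependency
# chain resolves to loadable .ko modules; otherwise the script reports
# "No valid driver found" (the string tested against in the trace).
pick_driver() {
    local ngroups=$1 depends=$2   # stand-ins for the sysfs glob count and modprobe output
    if (( ngroups > 0 )) && [[ $depends == *.ko* ]]; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}

pick_driver 174 'insmod /lib/modules/6.7.0/kernel/drivers/vfio/pci/vfio-pci.ko.xz'
```

On the traced node this prints `vfio-pci`, which is then compared against the `No valid driver found` sentinel and echoed as `Looking for driver=vfio-pci`.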
00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 
00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.015 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.274 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.274 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.274 23:29:44 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.274 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.274 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.274 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.274 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.274 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.274 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:56.274 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.274 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.274 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 
00:03:56.839 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:56.839 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:56.839 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.098 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:57.098 23:29:45 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:57.098 23:29:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.098 23:29:45 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.287 00:04:01.287 real 0m7.366s 00:04:01.287 user 0m2.073s 00:04:01.287 sys 0m3.791s 00:04:01.287 23:29:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:01.287 23:29:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:01.287 ************************************ 00:04:01.287 END TEST guess_driver 00:04:01.287 ************************************ 00:04:01.287 23:29:49 setup.sh.driver -- common/autotest_common.sh@1136 -- # return 0 00:04:01.287 00:04:01.287 real 0m11.172s 00:04:01.287 user 0m3.109s 00:04:01.287 sys 0m5.780s 00:04:01.287 23:29:49 setup.sh.driver -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:01.287 23:29:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:01.287 ************************************ 00:04:01.287 END TEST driver 00:04:01.287 ************************************ 00:04:01.287 23:29:49 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:04:01.287 23:29:49 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:01.287 23:29:49 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:01.287 23:29:49 
setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:01.287 23:29:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.287 ************************************ 00:04:01.287 START TEST devices 00:04:01.287 ************************************ 00:04:01.287 23:29:49 setup.sh.devices -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:01.287 * Looking for test storage... 00:04:01.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:01.287 23:29:49 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:01.287 23:29:49 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:01.287 23:29:49 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.287 23:29:49 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@1664 -- # local nvme bdf 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@196 -- # declare -a 
blocks 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:03.818 23:29:52 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:03.818 23:29:52 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:03.818 No valid GPT data, bailing 00:04:03.818 23:29:52 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.818 23:29:52 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:03.818 23:29:52 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:03.818 23:29:52 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:03.818 23:29:52 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:03.818 23:29:52 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:03.818 23:29:52 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:03.818 23:29:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.818 ************************************ 00:04:03.818 START TEST nvme_mount 00:04:03.818 ************************************ 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1117 -- # nvme_mount 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:03.818 23:29:52 
setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.818 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.819 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.819 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:03.819 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:03.819 23:29:52 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:04.755 Creating new GPT entries in memory. 00:04:04.755 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.755 other utilities. 00:04:04.755 23:29:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.755 23:29:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.755 23:29:53 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:04.755 23:29:53 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.755 23:29:53 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:05.689 Creating new GPT entries in memory. 00:04:05.689 The operation has completed successfully. 
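The sgdisk call traced above (`--new=1:2048:2099199`) follows from setup/common.sh's partition arithmetic: the 1 GiB partition size (`size=1073741824`) is divided into 512-byte sectors via `(( size /= 512 ))`, the first partition starts at the conventional sector 2048, and the end sector is `part_start + size - 1`. A sketch of just that arithmetic, safe to run anywhere (unlike sgdisk itself, which rewrites a real disk):

```shell
#!/usr/bin/env bash
# Sector arithmetic behind the traced "sgdisk --new=1:2048:2099199":
# bytes -> 512-byte sectors; end sector = start + sectors - 1.
part_bounds() {
    local size_bytes=$1 part_start=${2:-2048}
    local sectors=$(( size_bytes / 512 ))
    echo "${part_start}:$(( part_start + sectors - 1 ))"
}

part_bounds 1073741824   # 1 GiB, as in the trace -> 2048:2099199
```

A second 1 GiB partition would start at `part_end + 1` (sector 2099200), which is exactly what the `part_start == 0 ? 2048 : part_end + 1` ternary in the traced loop computes.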
00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 807641 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.689 23:29:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:08.218 23:29:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:08.218 
23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:08.218 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.218 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:08.478 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:08.478 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:08.478 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:08.478 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.478 23:29:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:11.014 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.014 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:11.014 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:11.014 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.014 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.014 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.014 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.014 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.014 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.015 23:29:59 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:11.274 23:30:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.274 23:30:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.809 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.809 00:04:13.809 real 0m10.193s 00:04:13.809 user 0m2.923s 00:04:13.809 sys 0m4.989s 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:13.809 23:30:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:13.809 ************************************ 00:04:13.809 END TEST nvme_mount 00:04:13.809 ************************************ 00:04:13.809 23:30:02 setup.sh.devices -- common/autotest_common.sh@1136 -- # return 0 00:04:13.809 23:30:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 
00:04:13.809 23:30:02 setup.sh.devices -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:13.809 23:30:02 setup.sh.devices -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:13.809 23:30:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.809 ************************************ 00:04:13.809 START TEST dm_mount 00:04:13.809 ************************************ 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1117 -- # dm_mount 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # 
parts+=("${disk}p$part") 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.809 23:30:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:15.265 Creating new GPT entries in memory. 00:04:15.265 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:15.265 other utilities. 00:04:15.265 23:30:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:15.265 23:30:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.265 23:30:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.265 23:30:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.265 23:30:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:15.835 Creating new GPT entries in memory. 00:04:15.835 The operation has completed successfully. 00:04:15.835 23:30:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:15.835 23:30:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.835 23:30:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:15.835 23:30:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.835 23:30:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:17.216 The operation has completed successfully. 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 811944 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.216 23:30:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:19.754 23:30:08 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.754 23:30:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:22.291 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:22.291 00:04:22.291 real 0m8.156s 00:04:22.291 user 0m1.805s 00:04:22.291 sys 0m3.279s 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:22.291 23:30:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:22.291 ************************************ 00:04:22.291 END TEST dm_mount 00:04:22.291 ************************************ 00:04:22.291 23:30:10 setup.sh.devices -- common/autotest_common.sh@1136 -- # return 0 00:04:22.291 23:30:10 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:22.291 23:30:10 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:22.291 23:30:10 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.291 23:30:10 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.291 23:30:10 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:22.291 23:30:10 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.291 23:30:10 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:22.291 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:22.291 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 
50 41 52 54 00:04:22.291 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:22.291 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:22.291 23:30:11 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:22.291 23:30:11 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:22.291 23:30:11 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:22.291 23:30:11 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:22.291 23:30:11 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:22.291 23:30:11 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:22.291 23:30:11 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:22.291 00:04:22.291 real 0m21.407s 00:04:22.291 user 0m5.669s 00:04:22.291 sys 0m10.056s 00:04:22.291 23:30:11 setup.sh.devices -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:22.291 23:30:11 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:22.291 ************************************ 00:04:22.291 END TEST devices 00:04:22.291 ************************************ 00:04:22.291 23:30:11 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:04:22.291 00:04:22.291 real 1m10.542s 00:04:22.291 user 0m22.328s 00:04:22.291 sys 0m38.227s 00:04:22.291 23:30:11 setup.sh -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:22.291 23:30:11 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:22.291 ************************************ 00:04:22.291 END TEST setup.sh 00:04:22.291 ************************************ 00:04:22.550 23:30:11 -- common/autotest_common.sh@1136 -- # return 0 00:04:22.550 23:30:11 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:25.083 Hugepages 00:04:25.083 node hugesize free / total 
00:04:25.083 node0 1048576kB 0 / 0 00:04:25.083 node0 2048kB 1024 / 1024 00:04:25.083 node1 1048576kB 0 / 0 00:04:25.083 node1 2048kB 1024 / 1024 00:04:25.083 00:04:25.083 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:25.083 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:25.083 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:25.083 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:25.083 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:25.083 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:25.083 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:25.083 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:25.083 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:25.083 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:25.342 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:25.342 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:25.342 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:25.342 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:25.342 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:25.342 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:25.342 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:25.342 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:25.342 23:30:14 -- spdk/autotest.sh@130 -- # uname -s 00:04:25.342 23:30:14 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:25.342 23:30:14 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:25.342 23:30:14 -- common/autotest_common.sh@1525 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.877 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.877 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.877 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.877 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 
00:04:27.878 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.878 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:28.812 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:28.812 23:30:17 -- common/autotest_common.sh@1526 -- # sleep 1 00:04:29.746 23:30:18 -- common/autotest_common.sh@1527 -- # bdfs=() 00:04:29.746 23:30:18 -- common/autotest_common.sh@1527 -- # local bdfs 00:04:29.746 23:30:18 -- common/autotest_common.sh@1528 -- # bdfs=($(get_nvme_bdfs)) 00:04:29.746 23:30:18 -- common/autotest_common.sh@1528 -- # get_nvme_bdfs 00:04:29.746 23:30:18 -- common/autotest_common.sh@1507 -- # bdfs=() 00:04:29.746 23:30:18 -- common/autotest_common.sh@1507 -- # local bdfs 00:04:29.746 23:30:18 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.746 23:30:18 -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:29.746 23:30:18 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:04:30.005 23:30:18 -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:04:30.005 23:30:18 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:5e:00.0 00:04:30.005 23:30:18 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:32.537 Waiting for block devices as requested 00:04:32.537 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:32.796 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:32.796 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:32.796 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:32.796 
0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:33.055 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:33.055 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:33.055 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:33.055 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:33.314 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:33.314 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:33.314 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:33.573 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:33.573 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:33.573 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:33.831 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:33.831 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:33.832 23:30:22 -- common/autotest_common.sh@1532 -- # for bdf in "${bdfs[@]}" 00:04:33.832 23:30:22 -- common/autotest_common.sh@1533 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:33.832 23:30:22 -- common/autotest_common.sh@1496 -- # readlink -f /sys/class/nvme/nvme0 00:04:33.832 23:30:22 -- common/autotest_common.sh@1496 -- # grep 0000:5e:00.0/nvme/nvme 00:04:33.832 23:30:22 -- common/autotest_common.sh@1496 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:33.832 23:30:22 -- common/autotest_common.sh@1497 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:33.832 23:30:22 -- common/autotest_common.sh@1501 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:33.832 23:30:22 -- common/autotest_common.sh@1501 -- # printf '%s\n' nvme0 00:04:33.832 23:30:22 -- common/autotest_common.sh@1533 -- # nvme_ctrlr=/dev/nvme0 00:04:33.832 23:30:22 -- common/autotest_common.sh@1534 -- # [[ -z /dev/nvme0 ]] 00:04:33.832 23:30:22 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:33.832 23:30:22 -- common/autotest_common.sh@1539 -- # grep oacs 00:04:33.832 23:30:22 -- common/autotest_common.sh@1539 -- # cut -d: -f2 
00:04:33.832 23:30:22 -- common/autotest_common.sh@1539 -- # oacs=' 0xe' 00:04:33.832 23:30:22 -- common/autotest_common.sh@1540 -- # oacs_ns_manage=8 00:04:33.832 23:30:22 -- common/autotest_common.sh@1542 -- # [[ 8 -ne 0 ]] 00:04:33.832 23:30:22 -- common/autotest_common.sh@1548 -- # nvme id-ctrl /dev/nvme0 00:04:33.832 23:30:22 -- common/autotest_common.sh@1548 -- # grep unvmcap 00:04:33.832 23:30:22 -- common/autotest_common.sh@1548 -- # cut -d: -f2 00:04:33.832 23:30:22 -- common/autotest_common.sh@1548 -- # unvmcap=' 0' 00:04:33.832 23:30:22 -- common/autotest_common.sh@1549 -- # [[ 0 -eq 0 ]] 00:04:33.832 23:30:22 -- common/autotest_common.sh@1551 -- # continue 00:04:33.832 23:30:22 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:33.832 23:30:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.832 23:30:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.832 23:30:22 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:33.832 23:30:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:33.832 23:30:22 -- common/autotest_common.sh@10 -- # set +x 00:04:33.832 23:30:22 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.437 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 
00:04:36.437 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:36.437 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:37.373 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:37.373 23:30:26 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:37.373 23:30:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.373 23:30:26 -- common/autotest_common.sh@10 -- # set +x 00:04:37.374 23:30:26 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:37.374 23:30:26 -- common/autotest_common.sh@1585 -- # mapfile -t bdfs 00:04:37.374 23:30:26 -- common/autotest_common.sh@1585 -- # get_nvme_bdfs_by_id 0x0a54 00:04:37.374 23:30:26 -- common/autotest_common.sh@1571 -- # bdfs=() 00:04:37.374 23:30:26 -- common/autotest_common.sh@1571 -- # local bdfs 00:04:37.374 23:30:26 -- common/autotest_common.sh@1573 -- # get_nvme_bdfs 00:04:37.374 23:30:26 -- common/autotest_common.sh@1507 -- # bdfs=() 00:04:37.374 23:30:26 -- common/autotest_common.sh@1507 -- # local bdfs 00:04:37.374 23:30:26 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:37.374 23:30:26 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:04:37.374 23:30:26 -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:37.374 23:30:26 -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:04:37.374 23:30:26 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:5e:00.0 00:04:37.374 23:30:26 -- common/autotest_common.sh@1573 -- # for bdf in $(get_nvme_bdfs) 00:04:37.374 23:30:26 -- common/autotest_common.sh@1574 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:37.633 23:30:26 -- common/autotest_common.sh@1574 -- # device=0x0a54 00:04:37.633 23:30:26 -- common/autotest_common.sh@1575 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:37.633 23:30:26 -- common/autotest_common.sh@1576 -- # bdfs+=($bdf) 
00:04:37.633 23:30:26 -- common/autotest_common.sh@1580 -- # printf '%s\n' 0000:5e:00.0 00:04:37.633 23:30:26 -- common/autotest_common.sh@1586 -- # [[ -z 0000:5e:00.0 ]] 00:04:37.633 23:30:26 -- common/autotest_common.sh@1591 -- # spdk_tgt_pid=821117 00:04:37.633 23:30:26 -- common/autotest_common.sh@1592 -- # waitforlisten 821117 00:04:37.633 23:30:26 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.633 23:30:26 -- common/autotest_common.sh@823 -- # '[' -z 821117 ']' 00:04:37.633 23:30:26 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.633 23:30:26 -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:37.634 23:30:26 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.634 23:30:26 -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:37.634 23:30:26 -- common/autotest_common.sh@10 -- # set +x 00:04:37.634 [2024-07-15 23:30:26.402971] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:04:37.634 [2024-07-15 23:30:26.403015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821117 ] 00:04:37.634 [2024-07-15 23:30:26.455708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.634 [2024-07-15 23:30:26.536869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.570 23:30:27 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:38.570 23:30:27 -- common/autotest_common.sh@856 -- # return 0 00:04:38.570 23:30:27 -- common/autotest_common.sh@1594 -- # bdf_id=0 00:04:38.570 23:30:27 -- common/autotest_common.sh@1595 -- # for bdf in "${bdfs[@]}" 00:04:38.570 23:30:27 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:41.860 nvme0n1 00:04:41.860 23:30:30 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:41.860 [2024-07-15 23:30:30.323481] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:41.860 request: 00:04:41.860 { 00:04:41.860 "nvme_ctrlr_name": "nvme0", 00:04:41.860 "password": "test", 00:04:41.860 "method": "bdev_nvme_opal_revert", 00:04:41.860 "req_id": 1 00:04:41.860 } 00:04:41.860 Got JSON-RPC error response 00:04:41.860 response: 00:04:41.860 { 00:04:41.860 "code": -32602, 00:04:41.860 "message": "Invalid parameters" 00:04:41.860 } 00:04:41.860 23:30:30 -- common/autotest_common.sh@1598 -- # true 00:04:41.860 23:30:30 -- common/autotest_common.sh@1599 -- # (( ++bdf_id )) 00:04:41.860 23:30:30 -- common/autotest_common.sh@1602 -- # killprocess 821117 00:04:41.860 23:30:30 -- common/autotest_common.sh@942 -- # '[' -z 821117 ']' 00:04:41.860 23:30:30 -- 
common/autotest_common.sh@946 -- # kill -0 821117 00:04:41.860 23:30:30 -- common/autotest_common.sh@947 -- # uname 00:04:41.860 23:30:30 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:41.860 23:30:30 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 821117 00:04:41.860 23:30:30 -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:41.860 23:30:30 -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:41.860 23:30:30 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 821117' 00:04:41.860 killing process with pid 821117 00:04:41.860 23:30:30 -- common/autotest_common.sh@961 -- # kill 821117 00:04:41.860 23:30:30 -- common/autotest_common.sh@966 -- # wait 821117 00:04:43.240 23:30:31 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:43.240 23:30:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:43.240 23:30:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:43.240 23:30:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:43.240 23:30:31 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:43.240 23:30:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:43.240 23:30:31 -- common/autotest_common.sh@10 -- # set +x 00:04:43.240 23:30:31 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:43.240 23:30:31 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:43.240 23:30:32 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:43.240 23:30:32 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:43.240 23:30:32 -- common/autotest_common.sh@10 -- # set +x 00:04:43.240 ************************************ 00:04:43.240 START TEST env 00:04:43.240 ************************************ 00:04:43.240 23:30:32 env -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:43.240 * Looking for test storage... 
00:04:43.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:43.240 23:30:32 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:43.240 23:30:32 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:43.240 23:30:32 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:43.240 23:30:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.240 ************************************ 00:04:43.240 START TEST env_memory 00:04:43.240 ************************************ 00:04:43.240 23:30:32 env.env_memory -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:43.240 00:04:43.240 00:04:43.240 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.240 http://cunit.sourceforge.net/ 00:04:43.240 00:04:43.240 00:04:43.240 Suite: memory 00:04:43.240 Test: alloc and free memory map ...[2024-07-15 23:30:32.194166] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:43.240 passed 00:04:43.240 Test: mem map translation ...[2024-07-15 23:30:32.212869] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:43.240 [2024-07-15 23:30:32.212884] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:43.240 [2024-07-15 23:30:32.212923] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:43.240 [2024-07-15 23:30:32.212931] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 
600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:43.501 passed 00:04:43.501 Test: mem map registration ...[2024-07-15 23:30:32.249950] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:43.501 [2024-07-15 23:30:32.249962] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:43.501 passed 00:04:43.501 Test: mem map adjacent registrations ...passed 00:04:43.501 00:04:43.501 Run Summary: Type Total Ran Passed Failed Inactive 00:04:43.501 suites 1 1 n/a 0 0 00:04:43.501 tests 4 4 4 0 0 00:04:43.501 asserts 152 152 152 0 n/a 00:04:43.501 00:04:43.501 Elapsed time = 0.136 seconds 00:04:43.501 00:04:43.501 real 0m0.148s 00:04:43.501 user 0m0.135s 00:04:43.501 sys 0m0.012s 00:04:43.501 23:30:32 env.env_memory -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:43.501 23:30:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:43.501 ************************************ 00:04:43.501 END TEST env_memory 00:04:43.501 ************************************ 00:04:43.501 23:30:32 env -- common/autotest_common.sh@1136 -- # return 0 00:04:43.501 23:30:32 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:43.501 23:30:32 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:43.501 23:30:32 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:43.501 23:30:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:43.501 ************************************ 00:04:43.501 START TEST env_vtophys 00:04:43.501 ************************************ 00:04:43.501 23:30:32 env.env_vtophys -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 
00:04:43.501 EAL: lib.eal log level changed from notice to debug 00:04:43.501 EAL: Detected lcore 0 as core 0 on socket 0 00:04:43.501 EAL: Detected lcore 1 as core 1 on socket 0 00:04:43.501 EAL: Detected lcore 2 as core 2 on socket 0 00:04:43.501 EAL: Detected lcore 3 as core 3 on socket 0 00:04:43.501 EAL: Detected lcore 4 as core 4 on socket 0 00:04:43.501 EAL: Detected lcore 5 as core 5 on socket 0 00:04:43.501 EAL: Detected lcore 6 as core 6 on socket 0 00:04:43.501 EAL: Detected lcore 7 as core 8 on socket 0 00:04:43.501 EAL: Detected lcore 8 as core 9 on socket 0 00:04:43.501 EAL: Detected lcore 9 as core 10 on socket 0 00:04:43.501 EAL: Detected lcore 10 as core 11 on socket 0 00:04:43.501 EAL: Detected lcore 11 as core 12 on socket 0 00:04:43.501 EAL: Detected lcore 12 as core 13 on socket 0 00:04:43.501 EAL: Detected lcore 13 as core 16 on socket 0 00:04:43.501 EAL: Detected lcore 14 as core 17 on socket 0 00:04:43.501 EAL: Detected lcore 15 as core 18 on socket 0 00:04:43.501 EAL: Detected lcore 16 as core 19 on socket 0 00:04:43.501 EAL: Detected lcore 17 as core 20 on socket 0 00:04:43.501 EAL: Detected lcore 18 as core 21 on socket 0 00:04:43.501 EAL: Detected lcore 19 as core 25 on socket 0 00:04:43.501 EAL: Detected lcore 20 as core 26 on socket 0 00:04:43.501 EAL: Detected lcore 21 as core 27 on socket 0 00:04:43.501 EAL: Detected lcore 22 as core 28 on socket 0 00:04:43.501 EAL: Detected lcore 23 as core 29 on socket 0 00:04:43.501 EAL: Detected lcore 24 as core 0 on socket 1 00:04:43.501 EAL: Detected lcore 25 as core 1 on socket 1 00:04:43.501 EAL: Detected lcore 26 as core 2 on socket 1 00:04:43.501 EAL: Detected lcore 27 as core 3 on socket 1 00:04:43.501 EAL: Detected lcore 28 as core 4 on socket 1 00:04:43.501 EAL: Detected lcore 29 as core 5 on socket 1 00:04:43.501 EAL: Detected lcore 30 as core 6 on socket 1 00:04:43.501 EAL: Detected lcore 31 as core 9 on socket 1 00:04:43.501 EAL: Detected lcore 32 as core 10 on socket 1 00:04:43.501 
EAL: Detected lcore 33 as core 11 on socket 1 00:04:43.501 EAL: Detected lcore 34 as core 12 on socket 1 00:04:43.501 EAL: Detected lcore 35 as core 13 on socket 1 00:04:43.501 EAL: Detected lcore 36 as core 16 on socket 1 00:04:43.501 EAL: Detected lcore 37 as core 17 on socket 1 00:04:43.501 EAL: Detected lcore 38 as core 18 on socket 1 00:04:43.501 EAL: Detected lcore 39 as core 19 on socket 1 00:04:43.501 EAL: Detected lcore 40 as core 20 on socket 1 00:04:43.501 EAL: Detected lcore 41 as core 21 on socket 1 00:04:43.501 EAL: Detected lcore 42 as core 24 on socket 1 00:04:43.501 EAL: Detected lcore 43 as core 25 on socket 1 00:04:43.501 EAL: Detected lcore 44 as core 26 on socket 1 00:04:43.501 EAL: Detected lcore 45 as core 27 on socket 1 00:04:43.501 EAL: Detected lcore 46 as core 28 on socket 1 00:04:43.501 EAL: Detected lcore 47 as core 29 on socket 1 00:04:43.501 EAL: Detected lcore 48 as core 0 on socket 0 00:04:43.501 EAL: Detected lcore 49 as core 1 on socket 0 00:04:43.501 EAL: Detected lcore 50 as core 2 on socket 0 00:04:43.501 EAL: Detected lcore 51 as core 3 on socket 0 00:04:43.501 EAL: Detected lcore 52 as core 4 on socket 0 00:04:43.501 EAL: Detected lcore 53 as core 5 on socket 0 00:04:43.501 EAL: Detected lcore 54 as core 6 on socket 0 00:04:43.501 EAL: Detected lcore 55 as core 8 on socket 0 00:04:43.501 EAL: Detected lcore 56 as core 9 on socket 0 00:04:43.501 EAL: Detected lcore 57 as core 10 on socket 0 00:04:43.501 EAL: Detected lcore 58 as core 11 on socket 0 00:04:43.501 EAL: Detected lcore 59 as core 12 on socket 0 00:04:43.501 EAL: Detected lcore 60 as core 13 on socket 0 00:04:43.501 EAL: Detected lcore 61 as core 16 on socket 0 00:04:43.501 EAL: Detected lcore 62 as core 17 on socket 0 00:04:43.501 EAL: Detected lcore 63 as core 18 on socket 0 00:04:43.501 EAL: Detected lcore 64 as core 19 on socket 0 00:04:43.501 EAL: Detected lcore 65 as core 20 on socket 0 00:04:43.501 EAL: Detected lcore 66 as core 21 on socket 0 00:04:43.501 
EAL: Detected lcore 67 as core 25 on socket 0 00:04:43.501 EAL: Detected lcore 68 as core 26 on socket 0 00:04:43.501 EAL: Detected lcore 69 as core 27 on socket 0 00:04:43.501 EAL: Detected lcore 70 as core 28 on socket 0 00:04:43.501 EAL: Detected lcore 71 as core 29 on socket 0 00:04:43.501 EAL: Detected lcore 72 as core 0 on socket 1 00:04:43.501 EAL: Detected lcore 73 as core 1 on socket 1 00:04:43.501 EAL: Detected lcore 74 as core 2 on socket 1 00:04:43.501 EAL: Detected lcore 75 as core 3 on socket 1 00:04:43.501 EAL: Detected lcore 76 as core 4 on socket 1 00:04:43.501 EAL: Detected lcore 77 as core 5 on socket 1 00:04:43.501 EAL: Detected lcore 78 as core 6 on socket 1 00:04:43.501 EAL: Detected lcore 79 as core 9 on socket 1 00:04:43.501 EAL: Detected lcore 80 as core 10 on socket 1 00:04:43.501 EAL: Detected lcore 81 as core 11 on socket 1 00:04:43.501 EAL: Detected lcore 82 as core 12 on socket 1 00:04:43.501 EAL: Detected lcore 83 as core 13 on socket 1 00:04:43.501 EAL: Detected lcore 84 as core 16 on socket 1 00:04:43.501 EAL: Detected lcore 85 as core 17 on socket 1 00:04:43.501 EAL: Detected lcore 86 as core 18 on socket 1 00:04:43.501 EAL: Detected lcore 87 as core 19 on socket 1 00:04:43.501 EAL: Detected lcore 88 as core 20 on socket 1 00:04:43.501 EAL: Detected lcore 89 as core 21 on socket 1 00:04:43.501 EAL: Detected lcore 90 as core 24 on socket 1 00:04:43.501 EAL: Detected lcore 91 as core 25 on socket 1 00:04:43.501 EAL: Detected lcore 92 as core 26 on socket 1 00:04:43.501 EAL: Detected lcore 93 as core 27 on socket 1 00:04:43.501 EAL: Detected lcore 94 as core 28 on socket 1 00:04:43.501 EAL: Detected lcore 95 as core 29 on socket 1 00:04:43.501 EAL: Maximum logical cores by configuration: 128 00:04:43.501 EAL: Detected CPU lcores: 96 00:04:43.501 EAL: Detected NUMA nodes: 2 00:04:43.501 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:43.501 EAL: Detected shared linkage of DPDK 00:04:43.501 EAL: No shared files mode enabled, 
IPC will be disabled 00:04:43.501 EAL: Bus pci wants IOVA as 'DC' 00:04:43.501 EAL: Buses did not request a specific IOVA mode. 00:04:43.501 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:43.501 EAL: Selected IOVA mode 'VA' 00:04:43.501 EAL: Probing VFIO support... 00:04:43.501 EAL: IOMMU type 1 (Type 1) is supported 00:04:43.501 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:43.501 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:43.501 EAL: VFIO support initialized 00:04:43.501 EAL: Ask a virtual area of 0x2e000 bytes 00:04:43.501 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:43.501 EAL: Setting up physically contiguous memory... 00:04:43.501 EAL: Setting maximum number of open files to 524288 00:04:43.501 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:43.501 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:43.501 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:43.501 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.501 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:43.501 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.501 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.501 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:43.501 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:43.501 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.501 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:43.501 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.501 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.501 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:43.501 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:43.501 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.501 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:43.501 EAL: Memseg list allocated at socket 0, page size 0x800kB 
00:04:43.501 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.501 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:43.501 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:43.501 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.501 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:43.501 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:43.501 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.502 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:43.502 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:43.502 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:43.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.502 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:43.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.502 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:43.502 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:43.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.502 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:43.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.502 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:43.502 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:43.502 EAL: Ask a virtual area of 0x61000 bytes 00:04:43.502 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:43.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.502 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:43.502 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:43.502 EAL: Ask a virtual area of 0x61000 bytes 
00:04:43.502 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:43.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:43.502 EAL: Ask a virtual area of 0x400000000 bytes 00:04:43.502 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:43.502 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:43.502 EAL: Hugepages will be freed exactly as allocated. 00:04:43.502 EAL: No shared files mode enabled, IPC is disabled 00:04:43.502 EAL: No shared files mode enabled, IPC is disabled 00:04:43.502 EAL: TSC frequency is ~2300000 KHz 00:04:43.502 EAL: Main lcore 0 is ready (tid=7f3485806a00;cpuset=[0]) 00:04:43.502 EAL: Trying to obtain current memory policy. 00:04:43.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:43.502 EAL: Restoring previous memory policy: 0 00:04:43.502 EAL: request: mp_malloc_sync 00:04:43.502 EAL: No shared files mode enabled, IPC is disabled 00:04:43.502 EAL: Heap on socket 0 was expanded by 2MB 00:04:43.502 EAL: No shared files mode enabled, IPC is disabled 00:04:43.502 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:43.502 EAL: Mem event callback 'spdk:(nil)' registered 00:04:43.502 00:04:43.502 00:04:43.502 CUnit - A unit testing framework for C - Version 2.1-3 00:04:43.502 http://cunit.sourceforge.net/ 00:04:43.502 00:04:43.502 00:04:43.502 Suite: components_suite 00:04:43.502 Test: vtophys_malloc_test ...passed 00:04:43.502 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:43.502 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.502 EAL: Restoring previous memory policy: 4
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was expanded by 4MB
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was shrunk by 4MB
00:04:43.502 EAL: Trying to obtain current memory policy.
00:04:43.502 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.502 EAL: Restoring previous memory policy: 4
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was expanded by 6MB
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was shrunk by 6MB
00:04:43.502 EAL: Trying to obtain current memory policy.
00:04:43.502 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.502 EAL: Restoring previous memory policy: 4
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was expanded by 10MB
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was shrunk by 10MB
00:04:43.502 EAL: Trying to obtain current memory policy.
00:04:43.502 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.502 EAL: Restoring previous memory policy: 4
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was expanded by 18MB
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was shrunk by 18MB
00:04:43.502 EAL: Trying to obtain current memory policy.
00:04:43.502 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.502 EAL: Restoring previous memory policy: 4
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was expanded by 34MB
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was shrunk by 34MB
00:04:43.502 EAL: Trying to obtain current memory policy.
00:04:43.502 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.502 EAL: Restoring previous memory policy: 4
00:04:43.502 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.502 EAL: request: mp_malloc_sync
00:04:43.502 EAL: No shared files mode enabled, IPC is disabled
00:04:43.502 EAL: Heap on socket 0 was expanded by 66MB
00:04:43.761 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.761 EAL: request: mp_malloc_sync
00:04:43.761 EAL: No shared files mode enabled, IPC is disabled
00:04:43.761 EAL: Heap on socket 0 was shrunk by 66MB
00:04:43.761 EAL: Trying to obtain current memory policy.
00:04:43.761 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.761 EAL: Restoring previous memory policy: 4
00:04:43.761 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.761 EAL: request: mp_malloc_sync
00:04:43.761 EAL: No shared files mode enabled, IPC is disabled
00:04:43.761 EAL: Heap on socket 0 was expanded by 130MB
00:04:43.761 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.761 EAL: request: mp_malloc_sync
00:04:43.761 EAL: No shared files mode enabled, IPC is disabled
00:04:43.761 EAL: Heap on socket 0 was shrunk by 130MB
00:04:43.761 EAL: Trying to obtain current memory policy.
00:04:43.761 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:43.761 EAL: Restoring previous memory policy: 4
00:04:43.762 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.762 EAL: request: mp_malloc_sync
00:04:43.762 EAL: No shared files mode enabled, IPC is disabled
00:04:43.762 EAL: Heap on socket 0 was expanded by 258MB
00:04:43.762 EAL: Calling mem event callback 'spdk:(nil)'
00:04:43.762 EAL: request: mp_malloc_sync
00:04:43.762 EAL: No shared files mode enabled, IPC is disabled
00:04:43.762 EAL: Heap on socket 0 was shrunk by 258MB
00:04:43.762 EAL: Trying to obtain current memory policy.
00:04:43.762 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:44.021 EAL: Restoring previous memory policy: 4
00:04:44.021 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.021 EAL: request: mp_malloc_sync
00:04:44.021 EAL: No shared files mode enabled, IPC is disabled
00:04:44.021 EAL: Heap on socket 0 was expanded by 514MB
00:04:44.021 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.021 EAL: request: mp_malloc_sync
00:04:44.021 EAL: No shared files mode enabled, IPC is disabled
00:04:44.021 EAL: Heap on socket 0 was shrunk by 514MB
00:04:44.021 EAL: Trying to obtain current memory policy.
00:04:44.021 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:44.280 EAL: Restoring previous memory policy: 4
00:04:44.280 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.280 EAL: request: mp_malloc_sync
00:04:44.280 EAL: No shared files mode enabled, IPC is disabled
00:04:44.280 EAL: Heap on socket 0 was expanded by 1026MB
00:04:44.540 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.540 EAL: request: mp_malloc_sync
00:04:44.540 EAL: No shared files mode enabled, IPC is disabled
00:04:44.540 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:44.540 passed
00:04:44.540
00:04:44.540 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:44.540               suites      1      1    n/a      0        0
00:04:44.540                tests      2      2      2      0        0
00:04:44.540              asserts    497    497    497      0      n/a
00:04:44.540
00:04:44.540 Elapsed time =    0.965 seconds
00:04:44.540 EAL: Calling mem event callback 'spdk:(nil)'
00:04:44.540 EAL: request: mp_malloc_sync
00:04:44.540 EAL: No shared files mode enabled, IPC is disabled
00:04:44.540 EAL: Heap on socket 0 was shrunk by 2MB
00:04:44.540 EAL: No shared files mode enabled, IPC is disabled
00:04:44.540 EAL: No shared files mode enabled, IPC is disabled
00:04:44.540 EAL: No shared files mode enabled, IPC is disabled
00:04:44.540
00:04:44.540 real    0m1.072s
00:04:44.540 user    0m0.635s
00:04:44.540 sys     0m0.413s
00:04:44.540 23:30:33 env.env_vtophys -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:44.540 23:30:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:44.540 ************************************
00:04:44.540 END TEST env_vtophys
00:04:44.540 ************************************
00:04:44.540 23:30:33 env -- common/autotest_common.sh@1136 -- # return 0
00:04:44.540 23:30:33 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:44.540 23:30:33 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:44.540 23:30:33 env -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:44.540 23:30:33 env -- common/autotest_common.sh@10 -- # set +x
00:04:44.540 ************************************
00:04:44.540 START TEST env_pci
00:04:44.540 ************************************
00:04:44.540 23:30:33 env.env_pci -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:44.799
00:04:44.799
00:04:44.799 CUnit - A unit testing framework for C - Version 2.1-3
00:04:44.799 http://cunit.sourceforge.net/
00:04:44.799
00:04:44.799
00:04:44.799 Suite: pci
00:04:44.799 Test: pci_hook ...[2024-07-15 23:30:33.519748] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 822431 has claimed it
00:04:44.799 EAL: Cannot find device (10000:00:01.0)
00:04:44.799 EAL: Failed to attach device on primary process
00:04:44.799 passed
00:04:44.799
00:04:44.799 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:44.799               suites      1      1    n/a      0        0
00:04:44.799                tests      1      1      1      0        0
00:04:44.799              asserts     25     25     25      0      n/a
00:04:44.799
00:04:44.799 Elapsed time =    0.025 seconds
00:04:44.799
00:04:44.799 real    0m0.043s
00:04:44.799 user    0m0.014s
00:04:44.799 sys     0m0.029s
00:04:44.799 23:30:33 env.env_pci -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:44.799 23:30:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:44.799 ************************************
00:04:44.799 END TEST env_pci
00:04:44.799 ************************************
00:04:44.799 23:30:33 env -- common/autotest_common.sh@1136 -- # return 0
00:04:44.799 23:30:33 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:44.799 23:30:33 env -- env/env.sh@15 -- # uname
00:04:44.799 23:30:33 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:44.799 23:30:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:44.799 23:30:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:44.799 23:30:33 env -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']'
00:04:44.799 23:30:33 env -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:44.799 23:30:33 env -- common/autotest_common.sh@10 -- # set +x
00:04:44.799 ************************************
00:04:44.799 START TEST env_dpdk_post_init
00:04:44.799 ************************************
00:04:44.799 23:30:33 env.env_dpdk_post_init -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:44.799 EAL: Detected CPU lcores: 96
00:04:44.799 EAL: Detected NUMA nodes: 2
00:04:44.799 EAL: Detected shared linkage of DPDK
00:04:44.799 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:44.799 EAL: Selected IOVA mode 'VA'
00:04:44.799 EAL: VFIO support initialized
00:04:44.799 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:44.799 EAL: Using IOMMU type 1 (Type 1)
00:04:44.799 EAL: Ignore mapping IO port bar(1)
00:04:44.799 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:44.799 EAL: Ignore mapping IO port bar(1)
00:04:44.799 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:44.799 EAL: Ignore mapping IO port bar(1)
00:04:44.799 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:44.799 EAL: Ignore mapping IO port bar(1)
00:04:44.799 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:45.058 EAL: Ignore mapping IO port bar(1)
00:04:45.058 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:45.058 EAL: Ignore mapping IO port bar(1)
00:04:45.058 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:45.058 EAL: Ignore mapping IO port bar(1)
00:04:45.058 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:45.058 EAL: Ignore mapping IO port bar(1)
00:04:45.058 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:45.627 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:45.627 EAL: Ignore mapping IO port bar(1)
00:04:45.627 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:45.627 EAL: Ignore mapping IO port bar(1)
00:04:45.627 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:45.627 EAL: Ignore mapping IO port bar(1)
00:04:45.627 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:45.627 EAL: Ignore mapping IO port bar(1)
00:04:45.627 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:45.887 EAL: Ignore mapping IO port bar(1)
00:04:45.887 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:45.887 EAL: Ignore mapping IO port bar(1)
00:04:45.887 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:45.887 EAL: Ignore mapping IO port bar(1)
00:04:45.887 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:45.887 EAL: Ignore mapping IO port bar(1)
00:04:45.887 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:49.173 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:49.173 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:49.173 Starting DPDK initialization...
00:04:49.173 Starting SPDK post initialization...
00:04:49.173 SPDK NVMe probe
00:04:49.173 Attaching to 0000:5e:00.0
00:04:49.173 Attached to 0000:5e:00.0
00:04:49.173 Cleaning up...
00:04:49.173
00:04:49.173 real    0m4.300s
00:04:49.173 user    0m3.254s
00:04:49.173 sys     0m0.123s
00:04:49.173 23:30:37 env.env_dpdk_post_init -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:49.173 23:30:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:49.173 ************************************
00:04:49.173 END TEST env_dpdk_post_init
00:04:49.173 ************************************
00:04:49.173 23:30:37 env -- common/autotest_common.sh@1136 -- # return 0
00:04:49.173 23:30:37 env -- env/env.sh@26 -- # uname
00:04:49.173 23:30:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:49.173 23:30:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:49.173 23:30:37 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:49.173 23:30:37 env -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:49.173 23:30:37 env -- common/autotest_common.sh@10 -- # set +x
00:04:49.173 ************************************
00:04:49.173 START TEST env_mem_callbacks
00:04:49.173 ************************************
00:04:49.173 23:30:37 env.env_mem_callbacks -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:49.173 EAL: Detected CPU lcores: 96
00:04:49.173 EAL: Detected NUMA nodes: 2
00:04:49.173 EAL: Detected shared linkage of DPDK
00:04:49.173 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:49.173 EAL: Selected IOVA mode 'VA'
00:04:49.173 EAL: VFIO support initialized
00:04:49.173 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:49.173
00:04:49.173
00:04:49.173 CUnit - A unit testing framework for C - Version 2.1-3
00:04:49.173 http://cunit.sourceforge.net/
00:04:49.173
00:04:49.173
00:04:49.173 Suite: memory
00:04:49.173 Test: test ...
00:04:49.173 register 0x200000200000 2097152
00:04:49.173 malloc 3145728
00:04:49.173 register 0x200000400000 4194304
00:04:49.173 buf 0x200000500000 len 3145728 PASSED
00:04:49.173 malloc 64
00:04:49.173 buf 0x2000004fff40 len 64 PASSED
00:04:49.173 malloc 4194304
00:04:49.173 register 0x200000800000 6291456
00:04:49.173 buf 0x200000a00000 len 4194304 PASSED
00:04:49.173 free 0x200000500000 3145728
00:04:49.173 free 0x2000004fff40 64
00:04:49.173 unregister 0x200000400000 4194304 PASSED
00:04:49.173 free 0x200000a00000 4194304
00:04:49.173 unregister 0x200000800000 6291456 PASSED
00:04:49.173 malloc 8388608
00:04:49.173 register 0x200000400000 10485760
00:04:49.173 buf 0x200000600000 len 8388608 PASSED
00:04:49.173 free 0x200000600000 8388608
00:04:49.173 unregister 0x200000400000 10485760 PASSED
00:04:49.173 passed
00:04:49.173
00:04:49.173 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:49.173               suites      1      1    n/a      0        0
00:04:49.173                tests      1      1      1      0        0
00:04:49.173              asserts     15     15     15      0      n/a
00:04:49.173
00:04:49.173 Elapsed time =    0.005 seconds
00:04:49.173
00:04:49.173 real    0m0.054s
00:04:49.173 user    0m0.017s
00:04:49.173 sys     0m0.036s
00:04:49.173 23:30:38 env.env_mem_callbacks -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:49.173 23:30:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:49.173 ************************************
00:04:49.173 END TEST env_mem_callbacks
00:04:49.173 ************************************
00:04:49.173 23:30:38 env -- common/autotest_common.sh@1136 -- # return 0
00:04:49.173
00:04:49.173 real    0m6.021s
00:04:49.173 user    0m4.208s
00:04:49.173 sys     0m0.889s
00:04:49.173 23:30:38 env -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:49.173 23:30:38 env -- common/autotest_common.sh@10 -- # set +x
00:04:49.173 ************************************
00:04:49.173 END TEST env
00:04:49.173 ************************************
00:04:49.173 23:30:38 -- common/autotest_common.sh@1136 -- # return 0
00:04:49.173 23:30:38 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:49.173 23:30:38 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:49.173 23:30:38 -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:49.173 23:30:38 -- common/autotest_common.sh@10 -- # set +x
00:04:49.173 ************************************
00:04:49.173 START TEST rpc
00:04:49.173 ************************************
00:04:49.173 23:30:38 rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:49.432 * Looking for test storage...
00:04:49.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:49.432 23:30:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=823245
00:04:49.432 23:30:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:49.432 23:30:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:49.432 23:30:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 823245
00:04:49.432 23:30:38 rpc -- common/autotest_common.sh@823 -- # '[' -z 823245 ']'
00:04:49.432 23:30:38 rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:49.432 23:30:38 rpc -- common/autotest_common.sh@828 -- # local max_retries=100
00:04:49.432 23:30:38 rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:49.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:49.432 23:30:38 rpc -- common/autotest_common.sh@832 -- # xtrace_disable
00:04:49.432 23:30:38 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:49.432 [2024-07-15 23:30:38.246233] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:04:49.432 [2024-07-15 23:30:38.246283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823245 ]
00:04:49.432 [2024-07-15 23:30:38.299928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:49.432 [2024-07-15 23:30:38.381070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:49.432 [2024-07-15 23:30:38.381106] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 823245' to capture a snapshot of events at runtime.
00:04:49.432 [2024-07-15 23:30:38.381113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:49.432 [2024-07-15 23:30:38.381119] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:49.432 [2024-07-15 23:30:38.381125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid823245 for offline analysis/debug.
00:04:49.432 [2024-07-15 23:30:38.381143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:50.368 23:30:39 rpc -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:04:50.368 23:30:39 rpc -- common/autotest_common.sh@856 -- # return 0
00:04:50.368 23:30:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:50.368 23:30:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:50.368 23:30:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:50.368 23:30:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:50.368 23:30:39 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:50.368 23:30:39 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:50.368 23:30:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.368 ************************************
00:04:50.368 START TEST rpc_integrity
00:04:50.368 ************************************
00:04:50.368 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity
00:04:50.368 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:50.368 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.368 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:50.368 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.368 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:50.368 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:50.368 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:50.368 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:50.368 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.368 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:50.368 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.368 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:50.368 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:50.368 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.368 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:50.368 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.368 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:50.368 {
00:04:50.368 "name": "Malloc0",
00:04:50.368 "aliases": [
00:04:50.368 "31eed5ec-7727-4f60-9119-95e8317c658a"
00:04:50.368 ],
00:04:50.368 "product_name": "Malloc disk",
00:04:50.368 "block_size": 512,
00:04:50.368 "num_blocks": 16384,
00:04:50.368 "uuid": "31eed5ec-7727-4f60-9119-95e8317c658a",
00:04:50.368 "assigned_rate_limits": {
00:04:50.368 "rw_ios_per_sec": 0,
00:04:50.368 "rw_mbytes_per_sec": 0,
00:04:50.368 "r_mbytes_per_sec": 0,
00:04:50.368 "w_mbytes_per_sec": 0
00:04:50.368 },
00:04:50.368 "claimed": false,
00:04:50.368 "zoned": false,
00:04:50.368 "supported_io_types": {
00:04:50.368 "read": true,
00:04:50.369 "write": true,
00:04:50.369 "unmap": true,
00:04:50.369 "flush": true,
00:04:50.369 "reset": true,
00:04:50.369 "nvme_admin": false,
00:04:50.369 "nvme_io": false,
00:04:50.369 "nvme_io_md": false,
00:04:50.369 "write_zeroes": true,
00:04:50.369 "zcopy": true,
00:04:50.369 "get_zone_info": false,
00:04:50.369 "zone_management": false,
00:04:50.369 "zone_append": false,
00:04:50.369 "compare": false,
00:04:50.369 "compare_and_write": false,
00:04:50.369 "abort": true,
00:04:50.369 "seek_hole": false,
00:04:50.369 "seek_data": false,
00:04:50.369 "copy": true,
00:04:50.369 "nvme_iov_md": false
00:04:50.369 },
00:04:50.369 "memory_domains": [
00:04:50.369 {
00:04:50.369 "dma_device_id": "system",
00:04:50.369 "dma_device_type": 1
00:04:50.369 },
00:04:50.369 {
00:04:50.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:50.369 "dma_device_type": 2
00:04:50.369 }
00:04:50.369 ],
00:04:50.369 "driver_specific": {}
00:04:50.369 }
00:04:50.369 ]'
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:50.369 [2024-07-15 23:30:39.209567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:50.369 [2024-07-15 23:30:39.209595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:50.369 [2024-07-15 23:30:39.209607] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6002d0
00:04:50.369 [2024-07-15 23:30:39.209613] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:50.369 [2024-07-15 23:30:39.210691] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:50.369 [2024-07-15 23:30:39.210712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:50.369 Passthru0
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:50.369 {
00:04:50.369 "name": "Malloc0",
00:04:50.369 "aliases": [
00:04:50.369 "31eed5ec-7727-4f60-9119-95e8317c658a"
00:04:50.369 ],
00:04:50.369 "product_name": "Malloc disk",
00:04:50.369 "block_size": 512,
00:04:50.369 "num_blocks": 16384,
00:04:50.369 "uuid": "31eed5ec-7727-4f60-9119-95e8317c658a",
00:04:50.369 "assigned_rate_limits": {
00:04:50.369 "rw_ios_per_sec": 0,
00:04:50.369 "rw_mbytes_per_sec": 0,
00:04:50.369 "r_mbytes_per_sec": 0,
00:04:50.369 "w_mbytes_per_sec": 0
00:04:50.369 },
00:04:50.369 "claimed": true,
00:04:50.369 "claim_type": "exclusive_write",
00:04:50.369 "zoned": false,
00:04:50.369 "supported_io_types": {
00:04:50.369 "read": true,
00:04:50.369 "write": true,
00:04:50.369 "unmap": true,
00:04:50.369 "flush": true,
00:04:50.369 "reset": true,
00:04:50.369 "nvme_admin": false,
00:04:50.369 "nvme_io": false,
00:04:50.369 "nvme_io_md": false,
00:04:50.369 "write_zeroes": true,
00:04:50.369 "zcopy": true,
00:04:50.369 "get_zone_info": false,
00:04:50.369 "zone_management": false,
00:04:50.369 "zone_append": false,
00:04:50.369 "compare": false,
00:04:50.369 "compare_and_write": false,
00:04:50.369 "abort": true,
00:04:50.369 "seek_hole": false,
00:04:50.369 "seek_data": false,
00:04:50.369 "copy": true,
00:04:50.369 "nvme_iov_md": false
00:04:50.369 },
00:04:50.369 "memory_domains": [
00:04:50.369 {
00:04:50.369 "dma_device_id": "system",
00:04:50.369 "dma_device_type": 1
00:04:50.369 },
00:04:50.369 {
00:04:50.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:50.369 "dma_device_type": 2
00:04:50.369 }
00:04:50.369 ],
00:04:50.369 "driver_specific": {}
00:04:50.369 },
00:04:50.369 {
00:04:50.369 "name": "Passthru0",
00:04:50.369 "aliases": [
00:04:50.369 "df2e65a1-b3f7-5a1a-90b9-9617d3e737dc"
00:04:50.369 ],
00:04:50.369 "product_name": "passthru",
00:04:50.369 "block_size": 512,
00:04:50.369 "num_blocks": 16384,
00:04:50.369 "uuid": "df2e65a1-b3f7-5a1a-90b9-9617d3e737dc",
00:04:50.369 "assigned_rate_limits": {
00:04:50.369 "rw_ios_per_sec": 0,
00:04:50.369 "rw_mbytes_per_sec": 0,
00:04:50.369 "r_mbytes_per_sec": 0,
00:04:50.369 "w_mbytes_per_sec": 0
00:04:50.369 },
00:04:50.369 "claimed": false,
00:04:50.369 "zoned": false,
00:04:50.369 "supported_io_types": {
00:04:50.369 "read": true,
00:04:50.369 "write": true,
00:04:50.369 "unmap": true,
00:04:50.369 "flush": true,
00:04:50.369 "reset": true,
00:04:50.369 "nvme_admin": false,
00:04:50.369 "nvme_io": false,
00:04:50.369 "nvme_io_md": false,
00:04:50.369 "write_zeroes": true,
00:04:50.369 "zcopy": true,
00:04:50.369 "get_zone_info": false,
00:04:50.369 "zone_management": false,
00:04:50.369 "zone_append": false,
00:04:50.369 "compare": false,
00:04:50.369 "compare_and_write": false,
00:04:50.369 "abort": true,
00:04:50.369 "seek_hole": false,
00:04:50.369 "seek_data": false,
00:04:50.369 "copy": true,
00:04:50.369 "nvme_iov_md": false
00:04:50.369 },
00:04:50.369 "memory_domains": [
00:04:50.369 {
00:04:50.369 "dma_device_id": "system",
00:04:50.369 "dma_device_type": 1
00:04:50.369 },
00:04:50.369 {
00:04:50.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:50.369 "dma_device_type": 2
00:04:50.369 }
00:04:50.369 ],
00:04:50.369 "driver_specific": {
00:04:50.369 "passthru": {
00:04:50.369 "name": "Passthru0",
00:04:50.369 "base_bdev_name": "Malloc0"
00:04:50.369 }
00:04:50.369 }
00:04:50.369 }
00:04:50.369 ]'
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:50.369 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:50.369 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:50.629 23:30:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:50.629
00:04:50.629 real    0m0.280s
00:04:50.629 user    0m0.173s
00:04:50.629 sys     0m0.038s
00:04:50.629 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:50.629 23:30:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:50.629 ************************************
00:04:50.629 END TEST rpc_integrity
00:04:50.629 ************************************
00:04:50.629 23:30:39 rpc -- common/autotest_common.sh@1136 -- # return 0
00:04:50.629 23:30:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:50.629 23:30:39 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:04:50.629 23:30:39 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable
00:04:50.629 23:30:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.629 ************************************
00:04:50.629 START TEST rpc_plugins
00:04:50.629 ************************************
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@1117 -- # rpc_plugins
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:50.629 {
00:04:50.629 "name": "Malloc1",
00:04:50.629 "aliases": [
00:04:50.629 "307c5347-f188-4f0f-863b-94bf97b5350b"
00:04:50.629 ],
00:04:50.629 "product_name": "Malloc disk",
00:04:50.629 "block_size": 4096,
00:04:50.629 "num_blocks": 256,
00:04:50.629 "uuid": "307c5347-f188-4f0f-863b-94bf97b5350b",
00:04:50.629 "assigned_rate_limits": {
00:04:50.629 "rw_ios_per_sec": 0,
00:04:50.629 "rw_mbytes_per_sec": 0,
00:04:50.629 "r_mbytes_per_sec": 0,
00:04:50.629 "w_mbytes_per_sec": 0
00:04:50.629 },
00:04:50.629 "claimed": false,
00:04:50.629 "zoned": false,
00:04:50.629 "supported_io_types": {
00:04:50.629 "read": true,
00:04:50.629 "write": true,
00:04:50.629 "unmap": true,
00:04:50.629 "flush": true,
00:04:50.629 "reset": true,
00:04:50.629 "nvme_admin": false,
00:04:50.629 "nvme_io": false,
00:04:50.629 "nvme_io_md": false,
00:04:50.629 "write_zeroes": true,
00:04:50.629 "zcopy": true,
00:04:50.629 "get_zone_info": false,
00:04:50.629 "zone_management": false,
00:04:50.629 "zone_append": false,
00:04:50.629 "compare": false,
00:04:50.629 "compare_and_write": false,
00:04:50.629 "abort": true,
00:04:50.629 "seek_hole": false,
00:04:50.629 "seek_data": false,
00:04:50.629 "copy": true,
00:04:50.629 "nvme_iov_md": false
00:04:50.629 },
00:04:50.629 "memory_domains": [
00:04:50.629 {
00:04:50.629 "dma_device_id": "system",
00:04:50.629 "dma_device_type": 1
00:04:50.629 },
00:04:50.629 {
00:04:50.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:50.629 "dma_device_type": 2
00:04:50.629 }
00:04:50.629 ],
00:04:50.629 "driver_specific": {}
00:04:50.629 }
00:04:50.629 ]'
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:50.629 23:30:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:50.629
00:04:50.629 real    0m0.137s
00:04:50.629 user    0m0.082s
00:04:50.629 sys     0m0.019s
00:04:50.629 23:30:39 rpc.rpc_plugins -- common/autotest_common.sh@1118 -- # xtrace_disable
00:04:50.629 23:30:39 rpc.rpc_plugins --
common/autotest_common.sh@10 -- # set +x 00:04:50.629 ************************************ 00:04:50.629 END TEST rpc_plugins 00:04:50.629 ************************************ 00:04:50.629 23:30:39 rpc -- common/autotest_common.sh@1136 -- # return 0 00:04:50.629 23:30:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:50.629 23:30:39 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:50.629 23:30:39 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:50.629 23:30:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.887 ************************************ 00:04:50.887 START TEST rpc_trace_cmd_test 00:04:50.887 ************************************ 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1117 -- # rpc_trace_cmd_test 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:50.887 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid823245", 00:04:50.887 "tpoint_group_mask": "0x8", 00:04:50.887 "iscsi_conn": { 00:04:50.887 "mask": "0x2", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "scsi": { 00:04:50.887 "mask": "0x4", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "bdev": { 00:04:50.887 "mask": "0x8", 00:04:50.887 "tpoint_mask": "0xffffffffffffffff" 00:04:50.887 }, 00:04:50.887 "nvmf_rdma": { 00:04:50.887 "mask": "0x10", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "nvmf_tcp": { 00:04:50.887 "mask": "0x20", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 
00:04:50.887 "ftl": { 00:04:50.887 "mask": "0x40", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "blobfs": { 00:04:50.887 "mask": "0x80", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "dsa": { 00:04:50.887 "mask": "0x200", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "thread": { 00:04:50.887 "mask": "0x400", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "nvme_pcie": { 00:04:50.887 "mask": "0x800", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "iaa": { 00:04:50.887 "mask": "0x1000", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "nvme_tcp": { 00:04:50.887 "mask": "0x2000", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "bdev_nvme": { 00:04:50.887 "mask": "0x4000", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 }, 00:04:50.887 "sock": { 00:04:50.887 "mask": "0x8000", 00:04:50.887 "tpoint_mask": "0x0" 00:04:50.887 } 00:04:50.887 }' 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:50.887 23:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:50.887 00:04:50.887 real 0m0.221s 00:04:50.887 user 0m0.189s 00:04:50.887 sys 0m0.023s 00:04:50.888 23:30:39 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1118 -- # xtrace_disable 00:04:50.888 23:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:50.888 ************************************ 00:04:50.888 END TEST rpc_trace_cmd_test 00:04:50.888 ************************************ 00:04:51.148 23:30:39 rpc -- common/autotest_common.sh@1136 -- # return 0 00:04:51.148 23:30:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:51.148 23:30:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:51.148 23:30:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:51.148 23:30:39 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:51.148 23:30:39 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:51.148 23:30:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.148 ************************************ 00:04:51.148 START TEST rpc_daemon_integrity 00:04:51.148 ************************************ 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.148 23:30:39 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.148 { 00:04:51.148 "name": "Malloc2", 00:04:51.148 "aliases": [ 00:04:51.148 "9c655019-813f-41e2-88f5-90ecd9e85be2" 00:04:51.148 ], 00:04:51.148 "product_name": "Malloc disk", 00:04:51.148 "block_size": 512, 00:04:51.148 "num_blocks": 16384, 00:04:51.148 "uuid": "9c655019-813f-41e2-88f5-90ecd9e85be2", 00:04:51.148 "assigned_rate_limits": { 00:04:51.148 "rw_ios_per_sec": 0, 00:04:51.148 "rw_mbytes_per_sec": 0, 00:04:51.148 "r_mbytes_per_sec": 0, 00:04:51.148 "w_mbytes_per_sec": 0 00:04:51.148 }, 00:04:51.148 "claimed": false, 00:04:51.148 "zoned": false, 00:04:51.148 "supported_io_types": { 00:04:51.148 "read": true, 00:04:51.148 "write": true, 00:04:51.148 "unmap": true, 00:04:51.148 "flush": true, 00:04:51.148 "reset": true, 00:04:51.148 "nvme_admin": false, 00:04:51.148 "nvme_io": false, 00:04:51.148 "nvme_io_md": false, 00:04:51.148 "write_zeroes": true, 00:04:51.148 "zcopy": true, 00:04:51.148 "get_zone_info": false, 00:04:51.148 "zone_management": false, 00:04:51.148 "zone_append": false, 00:04:51.148 "compare": false, 00:04:51.148 "compare_and_write": false, 00:04:51.148 "abort": true, 00:04:51.148 "seek_hole": false, 00:04:51.148 "seek_data": false, 00:04:51.148 "copy": true, 00:04:51.148 "nvme_iov_md": false 00:04:51.148 }, 00:04:51.148 "memory_domains": [ 00:04:51.148 { 00:04:51.148 "dma_device_id": "system", 00:04:51.148 "dma_device_type": 
1 00:04:51.148 }, 00:04:51.148 { 00:04:51.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.148 "dma_device_type": 2 00:04:51.148 } 00:04:51.148 ], 00:04:51.148 "driver_specific": {} 00:04:51.148 } 00:04:51.148 ]' 00:04:51.148 23:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:51.148 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.148 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:51.148 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:51.148 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.148 [2024-07-15 23:30:40.039836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:51.148 [2024-07-15 23:30:40.039866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:51.148 [2024-07-15 23:30:40.039878] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x797ac0 00:04:51.148 [2024-07-15 23:30:40.039885] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.148 [2024-07-15 23:30:40.040896] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.148 [2024-07-15 23:30:40.040919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.148 Passthru0 00:04:51.148 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:51.148 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:51.148 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:51.148 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.148 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:51.148 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 
00:04:51.148 { 00:04:51.148 "name": "Malloc2", 00:04:51.148 "aliases": [ 00:04:51.148 "9c655019-813f-41e2-88f5-90ecd9e85be2" 00:04:51.148 ], 00:04:51.148 "product_name": "Malloc disk", 00:04:51.148 "block_size": 512, 00:04:51.148 "num_blocks": 16384, 00:04:51.148 "uuid": "9c655019-813f-41e2-88f5-90ecd9e85be2", 00:04:51.148 "assigned_rate_limits": { 00:04:51.148 "rw_ios_per_sec": 0, 00:04:51.148 "rw_mbytes_per_sec": 0, 00:04:51.148 "r_mbytes_per_sec": 0, 00:04:51.148 "w_mbytes_per_sec": 0 00:04:51.148 }, 00:04:51.148 "claimed": true, 00:04:51.148 "claim_type": "exclusive_write", 00:04:51.148 "zoned": false, 00:04:51.148 "supported_io_types": { 00:04:51.148 "read": true, 00:04:51.148 "write": true, 00:04:51.148 "unmap": true, 00:04:51.148 "flush": true, 00:04:51.148 "reset": true, 00:04:51.148 "nvme_admin": false, 00:04:51.148 "nvme_io": false, 00:04:51.149 "nvme_io_md": false, 00:04:51.149 "write_zeroes": true, 00:04:51.149 "zcopy": true, 00:04:51.149 "get_zone_info": false, 00:04:51.149 "zone_management": false, 00:04:51.149 "zone_append": false, 00:04:51.149 "compare": false, 00:04:51.149 "compare_and_write": false, 00:04:51.149 "abort": true, 00:04:51.149 "seek_hole": false, 00:04:51.149 "seek_data": false, 00:04:51.149 "copy": true, 00:04:51.149 "nvme_iov_md": false 00:04:51.149 }, 00:04:51.149 "memory_domains": [ 00:04:51.149 { 00:04:51.149 "dma_device_id": "system", 00:04:51.149 "dma_device_type": 1 00:04:51.149 }, 00:04:51.149 { 00:04:51.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.149 "dma_device_type": 2 00:04:51.149 } 00:04:51.149 ], 00:04:51.149 "driver_specific": {} 00:04:51.149 }, 00:04:51.149 { 00:04:51.149 "name": "Passthru0", 00:04:51.149 "aliases": [ 00:04:51.149 "cd39498d-e07a-5bc4-ab24-35ced008caf4" 00:04:51.149 ], 00:04:51.149 "product_name": "passthru", 00:04:51.149 "block_size": 512, 00:04:51.149 "num_blocks": 16384, 00:04:51.149 "uuid": "cd39498d-e07a-5bc4-ab24-35ced008caf4", 00:04:51.149 "assigned_rate_limits": { 00:04:51.149 
"rw_ios_per_sec": 0, 00:04:51.149 "rw_mbytes_per_sec": 0, 00:04:51.149 "r_mbytes_per_sec": 0, 00:04:51.149 "w_mbytes_per_sec": 0 00:04:51.149 }, 00:04:51.149 "claimed": false, 00:04:51.149 "zoned": false, 00:04:51.149 "supported_io_types": { 00:04:51.149 "read": true, 00:04:51.149 "write": true, 00:04:51.149 "unmap": true, 00:04:51.149 "flush": true, 00:04:51.149 "reset": true, 00:04:51.149 "nvme_admin": false, 00:04:51.149 "nvme_io": false, 00:04:51.149 "nvme_io_md": false, 00:04:51.149 "write_zeroes": true, 00:04:51.149 "zcopy": true, 00:04:51.149 "get_zone_info": false, 00:04:51.149 "zone_management": false, 00:04:51.149 "zone_append": false, 00:04:51.149 "compare": false, 00:04:51.149 "compare_and_write": false, 00:04:51.149 "abort": true, 00:04:51.149 "seek_hole": false, 00:04:51.149 "seek_data": false, 00:04:51.149 "copy": true, 00:04:51.149 "nvme_iov_md": false 00:04:51.149 }, 00:04:51.149 "memory_domains": [ 00:04:51.149 { 00:04:51.149 "dma_device_id": "system", 00:04:51.149 "dma_device_type": 1 00:04:51.149 }, 00:04:51.149 { 00:04:51.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.149 "dma_device_type": 2 00:04:51.149 } 00:04:51.149 ], 00:04:51.149 "driver_specific": { 00:04:51.149 "passthru": { 00:04:51.149 "name": "Passthru0", 00:04:51.149 "base_bdev_name": "Malloc2" 00:04:51.149 } 00:04:51.149 } 00:04:51.149 } 00:04:51.149 ]' 00:04:51.149 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.149 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.149 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.149 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:51.149 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.149 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:51.149 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc2 00:04:51.149 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:51.149 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.409 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:51.409 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.409 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:51.409 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.409 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:51.409 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.409 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.409 23:30:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.409 00:04:51.409 real 0m0.252s 00:04:51.409 user 0m0.162s 00:04:51.409 sys 0m0.027s 00:04:51.409 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:51.409 23:30:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.409 ************************************ 00:04:51.409 END TEST rpc_daemon_integrity 00:04:51.409 ************************************ 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@1136 -- # return 0 00:04:51.409 23:30:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:51.409 23:30:40 rpc -- rpc/rpc.sh@84 -- # killprocess 823245 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@942 -- # '[' -z 823245 ']' 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@946 -- # kill -0 823245 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@947 -- # uname 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 
823245 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 823245' 00:04:51.409 killing process with pid 823245 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@961 -- # kill 823245 00:04:51.409 23:30:40 rpc -- common/autotest_common.sh@966 -- # wait 823245 00:04:51.669 00:04:51.669 real 0m2.432s 00:04:51.669 user 0m3.149s 00:04:51.669 sys 0m0.634s 00:04:51.669 23:30:40 rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:51.669 23:30:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.669 ************************************ 00:04:51.669 END TEST rpc 00:04:51.669 ************************************ 00:04:51.669 23:30:40 -- common/autotest_common.sh@1136 -- # return 0 00:04:51.669 23:30:40 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:51.669 23:30:40 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:51.669 23:30:40 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:51.669 23:30:40 -- common/autotest_common.sh@10 -- # set +x 00:04:51.669 ************************************ 00:04:51.669 START TEST skip_rpc 00:04:51.669 ************************************ 00:04:51.669 23:30:40 skip_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:51.930 * Looking for test storage... 
00:04:51.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.930 23:30:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:51.930 23:30:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:51.930 23:30:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:51.930 23:30:40 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:51.930 23:30:40 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:51.930 23:30:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.930 ************************************ 00:04:51.930 START TEST skip_rpc 00:04:51.930 ************************************ 00:04:51.930 23:30:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1117 -- # test_skip_rpc 00:04:51.930 23:30:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=823898 00:04:51.930 23:30:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.930 23:30:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:51.930 23:30:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:51.930 [2024-07-15 23:30:40.762988] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:04:51.930 [2024-07-15 23:30:40.763026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823898 ] 00:04:51.930 [2024-07-15 23:30:40.817798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.930 [2024-07-15 23:30:40.891688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # local es=0 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # rpc_cmd spdk_get_version 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # es=1 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 823898 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@942 -- # '[' -z 823898 ']' 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # kill -0 823898 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # uname 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 823898 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 823898' 00:04:57.270 killing process with pid 823898 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@961 -- # kill 823898 00:04:57.270 23:30:45 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # wait 823898 00:04:57.270 00:04:57.270 real 0m5.370s 00:04:57.270 user 0m5.139s 00:04:57.270 sys 0m0.261s 00:04:57.270 23:30:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:57.270 23:30:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.270 ************************************ 00:04:57.270 END TEST skip_rpc 00:04:57.270 ************************************ 00:04:57.270 23:30:46 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:04:57.270 23:30:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:57.270 23:30:46 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:57.270 23:30:46 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:57.270 23:30:46 skip_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:04:57.270 ************************************ 00:04:57.270 START TEST skip_rpc_with_json 00:04:57.270 ************************************ 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_json 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=824841 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 824841 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@823 -- # '[' -z 824841 ']' 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # local max_retries=100 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # xtrace_disable 00:04:57.270 23:30:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.270 [2024-07-15 23:30:46.202327] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:04:57.270 [2024-07-15 23:30:46.202368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid824841 ] 00:04:57.530 [2024-07-15 23:30:46.256193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.530 [2024-07-15 23:30:46.324833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # return 0 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.099 [2024-07-15 23:30:47.009791] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:58.099 request: 00:04:58.099 { 00:04:58.099 "trtype": "tcp", 00:04:58.099 "method": "nvmf_get_transports", 00:04:58.099 "req_id": 1 00:04:58.099 } 00:04:58.099 Got JSON-RPC error response 00:04:58.099 response: 00:04:58.099 { 00:04:58.099 "code": -19, 00:04:58.099 "message": "No such device" 00:04:58.099 } 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.099 [2024-07-15 23:30:47.021893] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.099 23:30:47 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:04:58.099 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.364 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:04:58.364 23:30:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.364 { 00:04:58.364 "subsystems": [ 00:04:58.364 { 00:04:58.364 "subsystem": "vfio_user_target", 00:04:58.364 "config": null 00:04:58.364 }, 00:04:58.364 { 00:04:58.364 "subsystem": "keyring", 00:04:58.364 "config": [] 00:04:58.364 }, 00:04:58.364 { 00:04:58.364 "subsystem": "iobuf", 00:04:58.364 "config": [ 00:04:58.364 { 00:04:58.364 "method": "iobuf_set_options", 00:04:58.364 "params": { 00:04:58.364 "small_pool_count": 8192, 00:04:58.364 "large_pool_count": 1024, 00:04:58.364 "small_bufsize": 8192, 00:04:58.364 "large_bufsize": 135168 00:04:58.364 } 00:04:58.364 } 00:04:58.364 ] 00:04:58.364 }, 00:04:58.364 { 00:04:58.364 "subsystem": "sock", 00:04:58.364 "config": [ 00:04:58.364 { 00:04:58.364 "method": "sock_set_default_impl", 00:04:58.364 "params": { 00:04:58.364 "impl_name": "posix" 00:04:58.364 } 00:04:58.364 }, 00:04:58.364 { 00:04:58.364 "method": "sock_impl_set_options", 00:04:58.364 "params": { 00:04:58.364 "impl_name": "ssl", 00:04:58.364 "recv_buf_size": 4096, 00:04:58.364 "send_buf_size": 4096, 00:04:58.364 "enable_recv_pipe": true, 00:04:58.364 "enable_quickack": false, 00:04:58.364 "enable_placement_id": 0, 00:04:58.364 "enable_zerocopy_send_server": true, 00:04:58.364 "enable_zerocopy_send_client": false, 00:04:58.365 "zerocopy_threshold": 0, 00:04:58.365 "tls_version": 0, 00:04:58.365 "enable_ktls": false 00:04:58.365 } 
00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "method": "sock_impl_set_options", 00:04:58.365 "params": { 00:04:58.365 "impl_name": "posix", 00:04:58.365 "recv_buf_size": 2097152, 00:04:58.365 "send_buf_size": 2097152, 00:04:58.365 "enable_recv_pipe": true, 00:04:58.365 "enable_quickack": false, 00:04:58.365 "enable_placement_id": 0, 00:04:58.365 "enable_zerocopy_send_server": true, 00:04:58.365 "enable_zerocopy_send_client": false, 00:04:58.365 "zerocopy_threshold": 0, 00:04:58.365 "tls_version": 0, 00:04:58.365 "enable_ktls": false 00:04:58.365 } 00:04:58.365 } 00:04:58.365 ] 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": "vmd", 00:04:58.365 "config": [] 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": "accel", 00:04:58.365 "config": [ 00:04:58.365 { 00:04:58.365 "method": "accel_set_options", 00:04:58.365 "params": { 00:04:58.365 "small_cache_size": 128, 00:04:58.365 "large_cache_size": 16, 00:04:58.365 "task_count": 2048, 00:04:58.365 "sequence_count": 2048, 00:04:58.365 "buf_count": 2048 00:04:58.365 } 00:04:58.365 } 00:04:58.365 ] 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": "bdev", 00:04:58.365 "config": [ 00:04:58.365 { 00:04:58.365 "method": "bdev_set_options", 00:04:58.365 "params": { 00:04:58.365 "bdev_io_pool_size": 65535, 00:04:58.365 "bdev_io_cache_size": 256, 00:04:58.365 "bdev_auto_examine": true, 00:04:58.365 "iobuf_small_cache_size": 128, 00:04:58.365 "iobuf_large_cache_size": 16 00:04:58.365 } 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "method": "bdev_raid_set_options", 00:04:58.365 "params": { 00:04:58.365 "process_window_size_kb": 1024 00:04:58.365 } 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "method": "bdev_iscsi_set_options", 00:04:58.365 "params": { 00:04:58.365 "timeout_sec": 30 00:04:58.365 } 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "method": "bdev_nvme_set_options", 00:04:58.365 "params": { 00:04:58.365 "action_on_timeout": "none", 00:04:58.365 "timeout_us": 0, 00:04:58.365 "timeout_admin_us": 0, 
00:04:58.365 "keep_alive_timeout_ms": 10000, 00:04:58.365 "arbitration_burst": 0, 00:04:58.365 "low_priority_weight": 0, 00:04:58.365 "medium_priority_weight": 0, 00:04:58.365 "high_priority_weight": 0, 00:04:58.365 "nvme_adminq_poll_period_us": 10000, 00:04:58.365 "nvme_ioq_poll_period_us": 0, 00:04:58.365 "io_queue_requests": 0, 00:04:58.365 "delay_cmd_submit": true, 00:04:58.365 "transport_retry_count": 4, 00:04:58.365 "bdev_retry_count": 3, 00:04:58.365 "transport_ack_timeout": 0, 00:04:58.365 "ctrlr_loss_timeout_sec": 0, 00:04:58.365 "reconnect_delay_sec": 0, 00:04:58.365 "fast_io_fail_timeout_sec": 0, 00:04:58.365 "disable_auto_failback": false, 00:04:58.365 "generate_uuids": false, 00:04:58.365 "transport_tos": 0, 00:04:58.365 "nvme_error_stat": false, 00:04:58.365 "rdma_srq_size": 0, 00:04:58.365 "io_path_stat": false, 00:04:58.365 "allow_accel_sequence": false, 00:04:58.365 "rdma_max_cq_size": 0, 00:04:58.365 "rdma_cm_event_timeout_ms": 0, 00:04:58.365 "dhchap_digests": [ 00:04:58.365 "sha256", 00:04:58.365 "sha384", 00:04:58.365 "sha512" 00:04:58.365 ], 00:04:58.365 "dhchap_dhgroups": [ 00:04:58.365 "null", 00:04:58.365 "ffdhe2048", 00:04:58.365 "ffdhe3072", 00:04:58.365 "ffdhe4096", 00:04:58.365 "ffdhe6144", 00:04:58.365 "ffdhe8192" 00:04:58.365 ] 00:04:58.365 } 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "method": "bdev_nvme_set_hotplug", 00:04:58.365 "params": { 00:04:58.365 "period_us": 100000, 00:04:58.365 "enable": false 00:04:58.365 } 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "method": "bdev_wait_for_examine" 00:04:58.365 } 00:04:58.365 ] 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": "scsi", 00:04:58.365 "config": null 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": "scheduler", 00:04:58.365 "config": [ 00:04:58.365 { 00:04:58.365 "method": "framework_set_scheduler", 00:04:58.365 "params": { 00:04:58.365 "name": "static" 00:04:58.365 } 00:04:58.365 } 00:04:58.365 ] 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": 
"vhost_scsi", 00:04:58.365 "config": [] 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": "vhost_blk", 00:04:58.365 "config": [] 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": "ublk", 00:04:58.365 "config": [] 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": "nbd", 00:04:58.365 "config": [] 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": "nvmf", 00:04:58.365 "config": [ 00:04:58.365 { 00:04:58.365 "method": "nvmf_set_config", 00:04:58.365 "params": { 00:04:58.365 "discovery_filter": "match_any", 00:04:58.365 "admin_cmd_passthru": { 00:04:58.365 "identify_ctrlr": false 00:04:58.365 } 00:04:58.365 } 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "method": "nvmf_set_max_subsystems", 00:04:58.365 "params": { 00:04:58.365 "max_subsystems": 1024 00:04:58.365 } 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "method": "nvmf_set_crdt", 00:04:58.365 "params": { 00:04:58.365 "crdt1": 0, 00:04:58.365 "crdt2": 0, 00:04:58.365 "crdt3": 0 00:04:58.365 } 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "method": "nvmf_create_transport", 00:04:58.365 "params": { 00:04:58.365 "trtype": "TCP", 00:04:58.365 "max_queue_depth": 128, 00:04:58.365 "max_io_qpairs_per_ctrlr": 127, 00:04:58.365 "in_capsule_data_size": 4096, 00:04:58.365 "max_io_size": 131072, 00:04:58.365 "io_unit_size": 131072, 00:04:58.365 "max_aq_depth": 128, 00:04:58.365 "num_shared_buffers": 511, 00:04:58.365 "buf_cache_size": 4294967295, 00:04:58.365 "dif_insert_or_strip": false, 00:04:58.365 "zcopy": false, 00:04:58.365 "c2h_success": true, 00:04:58.365 "sock_priority": 0, 00:04:58.365 "abort_timeout_sec": 1, 00:04:58.365 "ack_timeout": 0, 00:04:58.365 "data_wr_pool_size": 0 00:04:58.365 } 00:04:58.365 } 00:04:58.365 ] 00:04:58.365 }, 00:04:58.365 { 00:04:58.365 "subsystem": "iscsi", 00:04:58.365 "config": [ 00:04:58.365 { 00:04:58.365 "method": "iscsi_set_options", 00:04:58.365 "params": { 00:04:58.365 "node_base": "iqn.2016-06.io.spdk", 00:04:58.365 "max_sessions": 128, 00:04:58.365 
"max_connections_per_session": 2, 00:04:58.365 "max_queue_depth": 64, 00:04:58.365 "default_time2wait": 2, 00:04:58.365 "default_time2retain": 20, 00:04:58.365 "first_burst_length": 8192, 00:04:58.365 "immediate_data": true, 00:04:58.365 "allow_duplicated_isid": false, 00:04:58.365 "error_recovery_level": 0, 00:04:58.365 "nop_timeout": 60, 00:04:58.365 "nop_in_interval": 30, 00:04:58.365 "disable_chap": false, 00:04:58.365 "require_chap": false, 00:04:58.365 "mutual_chap": false, 00:04:58.365 "chap_group": 0, 00:04:58.365 "max_large_datain_per_connection": 64, 00:04:58.365 "max_r2t_per_connection": 4, 00:04:58.365 "pdu_pool_size": 36864, 00:04:58.365 "immediate_data_pool_size": 16384, 00:04:58.365 "data_out_pool_size": 2048 00:04:58.365 } 00:04:58.365 } 00:04:58.365 ] 00:04:58.365 } 00:04:58.365 ] 00:04:58.365 } 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 824841 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 824841 ']' 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 824841 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 824841 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 824841' 00:04:58.365 killing process with pid 824841 00:04:58.365 23:30:47 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 824841 00:04:58.365 23:30:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 824841 00:04:58.624 23:30:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=825083 00:04:58.624 23:30:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.624 23:30:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 825083 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 825083 ']' 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 825083 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 825083 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 825083' 00:05:03.899 killing process with pid 825083 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 825083 00:05:03.899 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 825083 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.158 00:05:04.158 real 0m6.752s 00:05:04.158 user 0m6.608s 00:05:04.158 sys 0m0.568s 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.158 ************************************ 00:05:04.158 END TEST skip_rpc_with_json 00:05:04.158 ************************************ 00:05:04.158 23:30:52 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:04.158 23:30:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:04.158 23:30:52 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:04.158 23:30:52 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:04.158 23:30:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.158 ************************************ 00:05:04.158 START TEST skip_rpc_with_delay 00:05:04.158 ************************************ 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_delay 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # local es=0 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@630 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:04.158 23:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.158 [2024-07-15 23:30:53.015154] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:04.158 [2024-07-15 23:30:53.015232] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:04.158 23:30:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # es=1 00:05:04.158 23:30:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:04.158 23:30:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:04.158 23:30:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:04.158 00:05:04.158 real 0m0.063s 00:05:04.158 user 0m0.043s 00:05:04.158 sys 0m0.019s 00:05:04.158 23:30:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:04.158 23:30:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:04.158 ************************************ 00:05:04.158 END TEST skip_rpc_with_delay 00:05:04.158 ************************************ 00:05:04.158 23:30:53 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:04.158 23:30:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:04.158 23:30:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:04.158 23:30:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:04.158 23:30:53 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:04.158 23:30:53 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:04.158 23:30:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.158 ************************************ 00:05:04.158 START TEST exit_on_failed_rpc_init 00:05:04.158 ************************************ 00:05:04.158 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1117 -- # test_exit_on_failed_rpc_init 00:05:04.158 23:30:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=826054 00:05:04.158 23:30:53 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@63 -- # waitforlisten 826054 00:05:04.158 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@823 -- # '[' -z 826054 ']' 00:05:04.158 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.158 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:04.158 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.158 23:30:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.158 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:04.158 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:04.158 [2024-07-15 23:30:53.127178] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:05:04.158 [2024-07-15 23:30:53.127218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826054 ] 00:05:04.424 [2024-07-15 23:30:53.179773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.424 [2024-07-15 23:30:53.259899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # return 0 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # local es=0 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:04.992 23:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.992 [2024-07-15 23:30:53.952706] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:04.992 [2024-07-15 23:30:53.952753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826289 ] 00:05:05.250 [2024-07-15 23:30:54.006269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.250 [2024-07-15 23:30:54.078674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.250 [2024-07-15 23:30:54.078741] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:05.250 [2024-07-15 23:30:54.078750] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:05.250 [2024-07-15 23:30:54.078756] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:05.250 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # es=234 00:05:05.250 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:05.250 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # es=106 00:05:05.250 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # case "$es" in 00:05:05.250 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=1 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 826054 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@942 -- # '[' -z 826054 ']' 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # kill -0 826054 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # uname 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 826054 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # echo 'killing process with pid 826054' 
00:05:05.251 killing process with pid 826054 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@961 -- # kill 826054 00:05:05.251 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # wait 826054 00:05:05.817 00:05:05.817 real 0m1.424s 00:05:05.817 user 0m1.653s 00:05:05.817 sys 0m0.361s 00:05:05.817 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:05.817 23:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.817 ************************************ 00:05:05.817 END TEST exit_on_failed_rpc_init 00:05:05.817 ************************************ 00:05:05.817 23:30:54 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:05.817 23:30:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:05.817 00:05:05.817 real 0m13.931s 00:05:05.817 user 0m13.573s 00:05:05.817 sys 0m1.426s 00:05:05.817 23:30:54 skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:05.817 23:30:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.817 ************************************ 00:05:05.817 END TEST skip_rpc 00:05:05.817 ************************************ 00:05:05.817 23:30:54 -- common/autotest_common.sh@1136 -- # return 0 00:05:05.817 23:30:54 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:05.817 23:30:54 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:05.817 23:30:54 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:05.817 23:30:54 -- common/autotest_common.sh@10 -- # set +x 00:05:05.817 ************************************ 00:05:05.817 START TEST rpc_client 00:05:05.817 ************************************ 00:05:05.817 23:30:54 rpc_client -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:05.817 * Looking for test storage... 00:05:05.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:05.817 23:30:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:05.817 OK 00:05:05.817 23:30:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:05.817 00:05:05.817 real 0m0.107s 00:05:05.817 user 0m0.043s 00:05:05.817 sys 0m0.070s 00:05:05.817 23:30:54 rpc_client -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:05.817 23:30:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:05.817 ************************************ 00:05:05.817 END TEST rpc_client 00:05:05.817 ************************************ 00:05:05.817 23:30:54 -- common/autotest_common.sh@1136 -- # return 0 00:05:05.817 23:30:54 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:05.817 23:30:54 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:05.817 23:30:54 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:05.817 23:30:54 -- common/autotest_common.sh@10 -- # set +x 00:05:05.817 ************************************ 00:05:05.817 START TEST json_config 00:05:05.817 ************************************ 00:05:05.817 23:30:54 json_config -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.075 
23:30:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.075 23:30:54 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.075 23:30:54 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.075 23:30:54 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.075 23:30:54 json_config -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.075 23:30:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.075 23:30:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.075 23:30:54 json_config -- paths/export.sh@5 -- # export PATH 00:05:06.075 23:30:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@47 -- # : 0 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:06.075 
23:30:54 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:06.075 23:30:54 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@34 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:06.075 INFO: JSON configuration test init 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:06.075 23:30:54 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:06.075 23:30:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:06.075 23:30:54 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:06.075 23:30:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.075 23:30:54 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:06.075 23:30:54 json_config -- json_config/common.sh@9 -- # local app=target 00:05:06.075 23:30:54 json_config -- json_config/common.sh@10 -- # shift 00:05:06.075 23:30:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.075 23:30:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.075 23:30:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.075 23:30:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.075 23:30:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 
00:05:06.075 23:30:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=826415 00:05:06.075 23:30:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.075 Waiting for target to run... 00:05:06.075 23:30:54 json_config -- json_config/common.sh@25 -- # waitforlisten 826415 /var/tmp/spdk_tgt.sock 00:05:06.075 23:30:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:06.075 23:30:54 json_config -- common/autotest_common.sh@823 -- # '[' -z 826415 ']' 00:05:06.075 23:30:54 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.075 23:30:54 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:06.075 23:30:54 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.075 23:30:54 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:06.075 23:30:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.075 [2024-07-15 23:30:54.934585] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:05:06.075 [2024-07-15 23:30:54.934633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826415 ] 00:05:06.642 [2024-07-15 23:30:55.374472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.642 [2024-07-15 23:30:55.465856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.900 23:30:55 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:06.900 23:30:55 json_config -- common/autotest_common.sh@856 -- # return 0 00:05:06.900 23:30:55 json_config -- json_config/common.sh@26 -- # echo '' 00:05:06.900 00:05:06.900 23:30:55 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:06.900 23:30:55 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:06.900 23:30:55 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:06.900 23:30:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.900 23:30:55 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:06.900 23:30:55 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:06.900 23:30:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.900 23:30:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.900 23:30:55 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:06.900 23:30:55 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:06.901 23:30:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:10.200 23:30:58 json_config -- json_config/json_config.sh@276 -- # 
tgt_check_notification_types 00:05:10.200 23:30:58 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:10.200 23:30:58 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:10.200 23:30:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.200 23:30:58 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:10.200 23:30:58 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:10.200 23:30:58 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:10.200 23:30:58 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:10.200 23:30:58 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:10.200 23:30:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:10.200 23:30:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.200 23:30:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:10.200 23:30:59 json_config -- 
json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:10.200 23:30:59 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:10.200 23:30:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:10.200 23:30:59 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.200 23:30:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.457 MallocForNvmf0 00:05:10.457 23:30:59 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.457 23:30:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.457 MallocForNvmf1 00:05:10.457 23:30:59 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.457 23:30:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:10.715 [2024-07-15 23:30:59.532909] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:10.715 23:30:59 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.715 23:30:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:10.974 23:30:59 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.974 23:30:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:10.974 23:30:59 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:10.974 23:30:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.232 23:31:00 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:11.233 23:31:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:11.233 [2024-07-15 23:31:00.199014] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:11.491 23:31:00 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:11.491 23:31:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.491 23:31:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.491 23:31:00 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:11.491 23:31:00 json_config -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.491 23:31:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.491 23:31:00 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:11.491 23:31:00 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.491 23:31:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:11.491 MallocBdevForConfigChangeCheck 00:05:11.750 23:31:00 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:11.750 23:31:00 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:11.750 23:31:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.750 23:31:00 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:11.750 23:31:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:12.010 23:31:00 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:12.010 INFO: shutting down applications... 
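The target configuration exercised above was assembled by a fixed sequence of `rpc.py` calls over the UNIX-domain socket: two malloc bdevs, a TCP transport, an NVMe-oF subsystem, two namespaces, and a listener. A dry-run sketch of that sequence follows; the relative `scripts/rpc.py` path is an assumption (the log uses the full workspace path), and `DRY=echo` makes the block runnable without SPDK by printing the commands instead of invoking them:

```shell
# Dry-run sketch of the RPC sequence seen in the log above. Set DRY= (empty)
# to actually invoke rpc.py against a running spdk_tgt; here DRY=echo so the
# block only prints each command.
RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
DRY=echo

$DRY $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$DRY $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$DRY $RPC nvmf_create_transport -t tcp -u 8192 -c 0
$DRY $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$DRY $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$DRY $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$DRY $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```

With `DRY` unset, each line becomes a real JSON-RPC request to the target, matching the order logged above.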
00:05:12.010 23:31:00 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:12.010 23:31:00 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:12.010 23:31:00 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:12.010 23:31:00 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:13.916 Calling clear_iscsi_subsystem 00:05:13.916 Calling clear_nvmf_subsystem 00:05:13.916 Calling clear_nbd_subsystem 00:05:13.916 Calling clear_ublk_subsystem 00:05:13.916 Calling clear_vhost_blk_subsystem 00:05:13.916 Calling clear_vhost_scsi_subsystem 00:05:13.916 Calling clear_bdev_subsystem 00:05:13.916 23:31:02 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:13.916 23:31:02 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:13.916 23:31:02 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:13.916 23:31:02 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.916 23:31:02 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:13.916 23:31:02 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:13.916 23:31:02 json_config -- json_config/json_config.sh@345 -- # break 00:05:13.916 23:31:02 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:13.916 23:31:02 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:13.916 23:31:02 json_config -- 
json_config/common.sh@31 -- # local app=target 00:05:13.916 23:31:02 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.916 23:31:02 json_config -- json_config/common.sh@35 -- # [[ -n 826415 ]] 00:05:13.916 23:31:02 json_config -- json_config/common.sh@38 -- # kill -SIGINT 826415 00:05:13.916 23:31:02 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.916 23:31:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.916 23:31:02 json_config -- json_config/common.sh@41 -- # kill -0 826415 00:05:13.916 23:31:02 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.494 23:31:03 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.494 23:31:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.494 23:31:03 json_config -- json_config/common.sh@41 -- # kill -0 826415 00:05:14.494 23:31:03 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:14.494 23:31:03 json_config -- json_config/common.sh@43 -- # break 00:05:14.494 23:31:03 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:14.494 23:31:03 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:14.494 SPDK target shutdown done 00:05:14.494 23:31:03 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:14.494 INFO: relaunching applications... 
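The shutdown just logged follows a simple pattern: signal the app, then poll `kill -0` up to 30 times at 0.5 s intervals until the PID is gone. A simplified standalone sketch is below; it signals a stand-in `sleep` process, and it uses SIGTERM for the demo because background jobs in non-interactive shells ignore SIGINT (the real harness sends SIGINT to `spdk_tgt`):

```shell
# Simplified sketch of json_config_test_shutdown_app: signal the process,
# then poll with `kill -0` until it exits or 30 attempts are exhausted.
shutdown_app() {
    local pid=$1 i=0
    kill -TERM "$pid" 2>/dev/null   # the real test sends SIGINT to spdk_tgt
    while (( i < 30 )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
        i=$((i + 1))
    done
    return 1                        # still alive after ~15 s of polling
}

sleep 60 &                          # stand-in for the spdk_tgt process
shutdown_app $!
```

The bounded loop is what produces the repeated `kill -0 826415` probes visible in the log before `SPDK target shutdown done`.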
00:05:14.494 23:31:03 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.494 23:31:03 json_config -- json_config/common.sh@9 -- # local app=target 00:05:14.494 23:31:03 json_config -- json_config/common.sh@10 -- # shift 00:05:14.494 23:31:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.494 23:31:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.494 23:31:03 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.494 23:31:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.494 23:31:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.494 23:31:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=828043 00:05:14.494 23:31:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.494 Waiting for target to run... 00:05:14.494 23:31:03 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:14.494 23:31:03 json_config -- json_config/common.sh@25 -- # waitforlisten 828043 /var/tmp/spdk_tgt.sock 00:05:14.494 23:31:03 json_config -- common/autotest_common.sh@823 -- # '[' -z 828043 ']' 00:05:14.494 23:31:03 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.494 23:31:03 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:14.494 23:31:03 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:14.494 23:31:03 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:14.494 23:31:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.494 [2024-07-15 23:31:03.276968] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:14.494 [2024-07-15 23:31:03.277023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid828043 ] 00:05:14.754 [2024-07-15 23:31:03.552066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.754 [2024-07-15 23:31:03.619430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.078 [2024-07-15 23:31:06.632831] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:18.078 [2024-07-15 23:31:06.665149] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:18.078 23:31:06 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:18.078 23:31:06 json_config -- common/autotest_common.sh@856 -- # return 0 00:05:18.078 23:31:06 json_config -- json_config/common.sh@26 -- # echo '' 00:05:18.078 00:05:18.078 23:31:06 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:18.078 23:31:06 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:18.078 INFO: Checking if target configuration is the same... 
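Both app launches above block on a `waitforlisten` helper with `max_retries=100`, waiting for the target to come up on `/var/tmp/spdk_tgt.sock`. A minimal stand-in for that pattern is sketched here; it only polls for the socket path to appear (the real SPDK helper also probes the socket via an RPC), and the `/tmp` path is illustrative:

```shell
# Minimal sketch of the waitforlisten pattern: poll for the RPC socket path
# with bounded retries. The real helper additionally issues an RPC probe.
waitforlisten() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i < max_retries )); do
        if [ -e "$sock" ]; then
            echo "listening on $sock"
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}

: > /tmp/demo_spdk_tgt.sock     # stand-in for the listener socket appearing
waitforlisten /tmp/demo_spdk_tgt.sock 5
rm -f /tmp/demo_spdk_tgt.sock
```

If the retries are exhausted the helper returns nonzero, which the test harness turns into the `on_error_exit` trap seen in the `trap ... ERR` line earlier in the log.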
00:05:18.078 23:31:06 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.078 23:31:06 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:18.078 23:31:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.078 + '[' 2 -ne 2 ']' 00:05:18.078 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:18.078 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:18.078 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.078 +++ basename /dev/fd/62 00:05:18.078 ++ mktemp /tmp/62.XXX 00:05:18.078 + tmp_file_1=/tmp/62.xPp 00:05:18.078 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.078 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.078 + tmp_file_2=/tmp/spdk_tgt_config.json.F6H 00:05:18.078 + ret=0 00:05:18.078 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.078 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.338 + diff -u /tmp/62.xPp /tmp/spdk_tgt_config.json.F6H 00:05:18.338 + echo 'INFO: JSON config files are the same' 00:05:18.338 INFO: JSON config files are the same 00:05:18.338 + rm /tmp/62.xPp /tmp/spdk_tgt_config.json.F6H 00:05:18.338 + exit 0 00:05:18.338 23:31:07 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:18.338 23:31:07 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:18.338 INFO: changing configuration and checking if this can be detected... 
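The same-configuration check above boils down to: dump both configs to temp files, normalize each with `config_filter.py -method sort`, and `diff -u` the results. A hedged reimplementation of that idea, substituting python3's `json` module for `config_filter.py` (the JSON payloads are illustrative, not real SPDK configs):

```shell
# Sketch of the json_diff.sh idea: normalize two JSON configs, then diff.
# json.dumps(..., sort_keys=True) stands in for config_filter.py -method sort.
norm() { python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True))'; }

a=$(mktemp); b=$(mktemp)
echo '{"subsystems": [], "version": "24.09"}' | norm > "$a"
echo '{"version": "24.09", "subsystems": []}' | norm > "$b"

if diff -u "$a" "$b" > /dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$a" "$b"
```

Key order differs between the two inputs, but sorting makes them byte-identical, so `diff` exits 0; the subsequent test deletes `MallocBdevForConfigChangeCheck` precisely so this comparison returns 1 instead.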
00:05:18.338 23:31:07 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.338 23:31:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:18.338 23:31:07 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.338 23:31:07 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:18.338 23:31:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.338 + '[' 2 -ne 2 ']' 00:05:18.338 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:18.338 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:18.338 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:18.338 +++ basename /dev/fd/62 00:05:18.338 ++ mktemp /tmp/62.XXX 00:05:18.338 + tmp_file_1=/tmp/62.HOb 00:05:18.338 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.338 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:18.338 + tmp_file_2=/tmp/spdk_tgt_config.json.fsb 00:05:18.338 + ret=0 00:05:18.338 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.596 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:18.856 + diff -u /tmp/62.HOb /tmp/spdk_tgt_config.json.fsb 00:05:18.856 + ret=1 00:05:18.856 + echo '=== Start of file: /tmp/62.HOb ===' 00:05:18.856 + cat /tmp/62.HOb 00:05:18.856 + echo '=== End of file: /tmp/62.HOb ===' 00:05:18.856 + echo '' 00:05:18.856 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fsb ===' 00:05:18.856 + cat /tmp/spdk_tgt_config.json.fsb 00:05:18.856 + echo '=== End of file: /tmp/spdk_tgt_config.json.fsb ===' 00:05:18.856 + echo '' 00:05:18.856 + rm /tmp/62.HOb /tmp/spdk_tgt_config.json.fsb 00:05:18.856 + exit 1 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:18.856 INFO: configuration change detected. 
00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@317 -- # [[ -n 828043 ]] 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.856 23:31:07 json_config -- json_config/json_config.sh@323 -- # killprocess 828043 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@942 -- # '[' -z 828043 ']' 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@946 -- # kill -0 828043 
00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@947 -- # uname 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 828043 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@960 -- # echo 'killing process with pid 828043' 00:05:18.856 killing process with pid 828043 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@961 -- # kill 828043 00:05:18.856 23:31:07 json_config -- common/autotest_common.sh@966 -- # wait 828043 00:05:20.235 23:31:09 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.235 23:31:09 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:20.235 23:31:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.235 23:31:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.495 23:31:09 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:20.495 23:31:09 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:20.495 INFO: Success 00:05:20.495 00:05:20.495 real 0m14.443s 00:05:20.495 user 0m15.121s 00:05:20.495 sys 0m1.828s 00:05:20.495 23:31:09 json_config -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:20.495 23:31:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.495 ************************************ 00:05:20.495 END TEST json_config 00:05:20.495 ************************************ 00:05:20.495 23:31:09 -- common/autotest_common.sh@1136 -- # return 0 00:05:20.495 23:31:09 -- 
spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:20.495 23:31:09 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:20.495 23:31:09 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:20.495 23:31:09 -- common/autotest_common.sh@10 -- # set +x 00:05:20.495 ************************************ 00:05:20.495 START TEST json_config_extra_key 00:05:20.495 ************************************ 00:05:20.495 23:31:09 json_config_extra_key -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:20.495 23:31:09 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.495 23:31:09 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.495 23:31:09 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.495 23:31:09 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.495 23:31:09 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.495 23:31:09 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.495 23:31:09 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:20.495 23:31:09 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:20.495 23:31:09 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:20.495 INFO: launching applications... 
00:05:20.495 23:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=829175 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:20.495 Waiting for target to run... 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 829175 /var/tmp/spdk_tgt.sock 00:05:20.495 23:31:09 json_config_extra_key -- common/autotest_common.sh@823 -- # '[' -z 829175 ']' 00:05:20.495 23:31:09 json_config_extra_key -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:20.495 23:31:09 json_config_extra_key -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:20.495 23:31:09 json_config_extra_key -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:20.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:20.495 23:31:09 json_config_extra_key -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:20.495 23:31:09 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:20.495 23:31:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:20.495 [2024-07-15 23:31:09.397127] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:20.495 [2024-07-15 23:31:09.397177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829175 ] 00:05:20.755 [2024-07-15 23:31:09.668543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.014 [2024-07-15 23:31:09.737169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.273 23:31:10 json_config_extra_key -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:21.273 23:31:10 json_config_extra_key -- common/autotest_common.sh@856 -- # return 0 00:05:21.273 23:31:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:21.273 00:05:21.273 23:31:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:21.273 INFO: shutting down applications... 
00:05:21.273 23:31:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:21.273 23:31:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:21.273 23:31:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.273 23:31:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 829175 ]] 00:05:21.273 23:31:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 829175 00:05:21.273 23:31:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.273 23:31:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.273 23:31:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 829175 00:05:21.273 23:31:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.841 23:31:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.841 23:31:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.841 23:31:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 829175 00:05:21.841 23:31:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.841 23:31:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:21.841 23:31:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.841 23:31:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.841 SPDK target shutdown done 00:05:21.841 23:31:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:21.841 Success 00:05:21.841 00:05:21.841 real 0m1.413s 00:05:21.841 user 0m1.197s 00:05:21.841 sys 0m0.359s 00:05:21.841 23:31:10 json_config_extra_key -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:21.841 23:31:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.841 ************************************ 
00:05:21.841 END TEST json_config_extra_key 00:05:21.841 ************************************ 00:05:21.842 23:31:10 -- common/autotest_common.sh@1136 -- # return 0 00:05:21.842 23:31:10 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.842 23:31:10 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:21.842 23:31:10 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:21.842 23:31:10 -- common/autotest_common.sh@10 -- # set +x 00:05:21.842 ************************************ 00:05:21.842 START TEST alias_rpc 00:05:21.842 ************************************ 00:05:21.842 23:31:10 alias_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:22.101 * Looking for test storage... 00:05:22.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:22.101 23:31:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.101 23:31:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.101 23:31:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=829453 00:05:22.101 23:31:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 829453 00:05:22.101 23:31:10 alias_rpc -- common/autotest_common.sh@823 -- # '[' -z 829453 ']' 00:05:22.101 23:31:10 alias_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.101 23:31:10 alias_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:22.101 23:31:10 alias_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.101 23:31:10 alias_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:22.101 23:31:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.101 [2024-07-15 23:31:10.895197] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:22.101 [2024-07-15 23:31:10.895250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829453 ] 00:05:22.101 [2024-07-15 23:31:10.948583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.101 [2024-07-15 23:31:11.027937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@856 -- # return 0 00:05:23.038 23:31:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:23.038 23:31:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 829453 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@942 -- # '[' -z 829453 ']' 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@946 -- # kill -0 829453 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@947 -- # uname 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 829453 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 829453' 00:05:23.038 killing process with pid 829453 00:05:23.038 23:31:11 alias_rpc 
-- common/autotest_common.sh@961 -- # kill 829453 00:05:23.038 23:31:11 alias_rpc -- common/autotest_common.sh@966 -- # wait 829453 00:05:23.298 00:05:23.298 real 0m1.464s 00:05:23.298 user 0m1.620s 00:05:23.298 sys 0m0.367s 00:05:23.298 23:31:12 alias_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:23.298 23:31:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.298 ************************************ 00:05:23.298 END TEST alias_rpc 00:05:23.298 ************************************ 00:05:23.298 23:31:12 -- common/autotest_common.sh@1136 -- # return 0 00:05:23.298 23:31:12 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:23.298 23:31:12 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:23.298 23:31:12 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:23.298 23:31:12 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:23.298 23:31:12 -- common/autotest_common.sh@10 -- # set +x 00:05:23.558 ************************************ 00:05:23.558 START TEST spdkcli_tcp 00:05:23.558 ************************************ 00:05:23.558 23:31:12 spdkcli_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:23.558 * Looking for test storage... 
00:05:23.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:23.558 23:31:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:23.558 23:31:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:23.558 23:31:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:23.558 23:31:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:23.558 23:31:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:23.558 23:31:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:23.558 23:31:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:23.558 23:31:12 spdkcli_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:23.558 23:31:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.558 23:31:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=829747 00:05:23.558 23:31:12 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 829747 00:05:23.558 23:31:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:23.558 23:31:12 spdkcli_tcp -- common/autotest_common.sh@823 -- # '[' -z 829747 ']' 00:05:23.558 23:31:12 spdkcli_tcp -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.558 23:31:12 spdkcli_tcp -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:23.558 23:31:12 spdkcli_tcp -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.558 23:31:12 spdkcli_tcp -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:23.558 23:31:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.558 [2024-07-15 23:31:12.426120] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:23.558 [2024-07-15 23:31:12.426171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829747 ] 00:05:23.558 [2024-07-15 23:31:12.479046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.817 [2024-07-15 23:31:12.560543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.817 [2024-07-15 23:31:12.560546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.385 23:31:13 spdkcli_tcp -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:24.385 23:31:13 spdkcli_tcp -- common/autotest_common.sh@856 -- # return 0 00:05:24.385 23:31:13 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=829974 00:05:24.385 23:31:13 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:24.385 23:31:13 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:24.645 [ 00:05:24.645 "bdev_malloc_delete", 00:05:24.645 "bdev_malloc_create", 00:05:24.645 "bdev_null_resize", 00:05:24.645 "bdev_null_delete", 00:05:24.645 "bdev_null_create", 00:05:24.645 "bdev_nvme_cuse_unregister", 00:05:24.645 "bdev_nvme_cuse_register", 00:05:24.645 "bdev_opal_new_user", 00:05:24.645 "bdev_opal_set_lock_state", 00:05:24.645 "bdev_opal_delete", 00:05:24.645 "bdev_opal_get_info", 00:05:24.645 "bdev_opal_create", 00:05:24.645 "bdev_nvme_opal_revert", 00:05:24.645 "bdev_nvme_opal_init", 00:05:24.645 "bdev_nvme_send_cmd", 00:05:24.645 
"bdev_nvme_get_path_iostat", 00:05:24.645 "bdev_nvme_get_mdns_discovery_info", 00:05:24.645 "bdev_nvme_stop_mdns_discovery", 00:05:24.645 "bdev_nvme_start_mdns_discovery", 00:05:24.645 "bdev_nvme_set_multipath_policy", 00:05:24.645 "bdev_nvme_set_preferred_path", 00:05:24.645 "bdev_nvme_get_io_paths", 00:05:24.645 "bdev_nvme_remove_error_injection", 00:05:24.645 "bdev_nvme_add_error_injection", 00:05:24.645 "bdev_nvme_get_discovery_info", 00:05:24.645 "bdev_nvme_stop_discovery", 00:05:24.645 "bdev_nvme_start_discovery", 00:05:24.645 "bdev_nvme_get_controller_health_info", 00:05:24.645 "bdev_nvme_disable_controller", 00:05:24.645 "bdev_nvme_enable_controller", 00:05:24.645 "bdev_nvme_reset_controller", 00:05:24.645 "bdev_nvme_get_transport_statistics", 00:05:24.645 "bdev_nvme_apply_firmware", 00:05:24.645 "bdev_nvme_detach_controller", 00:05:24.645 "bdev_nvme_get_controllers", 00:05:24.645 "bdev_nvme_attach_controller", 00:05:24.645 "bdev_nvme_set_hotplug", 00:05:24.645 "bdev_nvme_set_options", 00:05:24.645 "bdev_passthru_delete", 00:05:24.645 "bdev_passthru_create", 00:05:24.645 "bdev_lvol_set_parent_bdev", 00:05:24.645 "bdev_lvol_set_parent", 00:05:24.645 "bdev_lvol_check_shallow_copy", 00:05:24.645 "bdev_lvol_start_shallow_copy", 00:05:24.645 "bdev_lvol_grow_lvstore", 00:05:24.645 "bdev_lvol_get_lvols", 00:05:24.645 "bdev_lvol_get_lvstores", 00:05:24.645 "bdev_lvol_delete", 00:05:24.645 "bdev_lvol_set_read_only", 00:05:24.645 "bdev_lvol_resize", 00:05:24.645 "bdev_lvol_decouple_parent", 00:05:24.645 "bdev_lvol_inflate", 00:05:24.645 "bdev_lvol_rename", 00:05:24.645 "bdev_lvol_clone_bdev", 00:05:24.645 "bdev_lvol_clone", 00:05:24.645 "bdev_lvol_snapshot", 00:05:24.645 "bdev_lvol_create", 00:05:24.645 "bdev_lvol_delete_lvstore", 00:05:24.645 "bdev_lvol_rename_lvstore", 00:05:24.645 "bdev_lvol_create_lvstore", 00:05:24.645 "bdev_raid_set_options", 00:05:24.645 "bdev_raid_remove_base_bdev", 00:05:24.645 "bdev_raid_add_base_bdev", 00:05:24.645 "bdev_raid_delete", 
00:05:24.645 "bdev_raid_create", 00:05:24.645 "bdev_raid_get_bdevs", 00:05:24.645 "bdev_error_inject_error", 00:05:24.645 "bdev_error_delete", 00:05:24.645 "bdev_error_create", 00:05:24.645 "bdev_split_delete", 00:05:24.645 "bdev_split_create", 00:05:24.645 "bdev_delay_delete", 00:05:24.645 "bdev_delay_create", 00:05:24.645 "bdev_delay_update_latency", 00:05:24.645 "bdev_zone_block_delete", 00:05:24.645 "bdev_zone_block_create", 00:05:24.645 "blobfs_create", 00:05:24.645 "blobfs_detect", 00:05:24.645 "blobfs_set_cache_size", 00:05:24.645 "bdev_aio_delete", 00:05:24.645 "bdev_aio_rescan", 00:05:24.645 "bdev_aio_create", 00:05:24.645 "bdev_ftl_set_property", 00:05:24.645 "bdev_ftl_get_properties", 00:05:24.645 "bdev_ftl_get_stats", 00:05:24.645 "bdev_ftl_unmap", 00:05:24.645 "bdev_ftl_unload", 00:05:24.645 "bdev_ftl_delete", 00:05:24.645 "bdev_ftl_load", 00:05:24.645 "bdev_ftl_create", 00:05:24.645 "bdev_virtio_attach_controller", 00:05:24.645 "bdev_virtio_scsi_get_devices", 00:05:24.645 "bdev_virtio_detach_controller", 00:05:24.645 "bdev_virtio_blk_set_hotplug", 00:05:24.645 "bdev_iscsi_delete", 00:05:24.645 "bdev_iscsi_create", 00:05:24.645 "bdev_iscsi_set_options", 00:05:24.645 "accel_error_inject_error", 00:05:24.645 "ioat_scan_accel_module", 00:05:24.645 "dsa_scan_accel_module", 00:05:24.645 "iaa_scan_accel_module", 00:05:24.645 "vfu_virtio_create_scsi_endpoint", 00:05:24.645 "vfu_virtio_scsi_remove_target", 00:05:24.645 "vfu_virtio_scsi_add_target", 00:05:24.645 "vfu_virtio_create_blk_endpoint", 00:05:24.645 "vfu_virtio_delete_endpoint", 00:05:24.645 "keyring_file_remove_key", 00:05:24.645 "keyring_file_add_key", 00:05:24.645 "keyring_linux_set_options", 00:05:24.645 "iscsi_get_histogram", 00:05:24.645 "iscsi_enable_histogram", 00:05:24.645 "iscsi_set_options", 00:05:24.645 "iscsi_get_auth_groups", 00:05:24.645 "iscsi_auth_group_remove_secret", 00:05:24.645 "iscsi_auth_group_add_secret", 00:05:24.645 "iscsi_delete_auth_group", 00:05:24.645 
"iscsi_create_auth_group", 00:05:24.645 "iscsi_set_discovery_auth", 00:05:24.645 "iscsi_get_options", 00:05:24.645 "iscsi_target_node_request_logout", 00:05:24.645 "iscsi_target_node_set_redirect", 00:05:24.645 "iscsi_target_node_set_auth", 00:05:24.645 "iscsi_target_node_add_lun", 00:05:24.645 "iscsi_get_stats", 00:05:24.645 "iscsi_get_connections", 00:05:24.645 "iscsi_portal_group_set_auth", 00:05:24.645 "iscsi_start_portal_group", 00:05:24.645 "iscsi_delete_portal_group", 00:05:24.645 "iscsi_create_portal_group", 00:05:24.645 "iscsi_get_portal_groups", 00:05:24.645 "iscsi_delete_target_node", 00:05:24.645 "iscsi_target_node_remove_pg_ig_maps", 00:05:24.645 "iscsi_target_node_add_pg_ig_maps", 00:05:24.645 "iscsi_create_target_node", 00:05:24.645 "iscsi_get_target_nodes", 00:05:24.645 "iscsi_delete_initiator_group", 00:05:24.645 "iscsi_initiator_group_remove_initiators", 00:05:24.645 "iscsi_initiator_group_add_initiators", 00:05:24.645 "iscsi_create_initiator_group", 00:05:24.645 "iscsi_get_initiator_groups", 00:05:24.645 "nvmf_set_crdt", 00:05:24.645 "nvmf_set_config", 00:05:24.645 "nvmf_set_max_subsystems", 00:05:24.645 "nvmf_stop_mdns_prr", 00:05:24.645 "nvmf_publish_mdns_prr", 00:05:24.645 "nvmf_subsystem_get_listeners", 00:05:24.645 "nvmf_subsystem_get_qpairs", 00:05:24.645 "nvmf_subsystem_get_controllers", 00:05:24.645 "nvmf_get_stats", 00:05:24.645 "nvmf_get_transports", 00:05:24.645 "nvmf_create_transport", 00:05:24.645 "nvmf_get_targets", 00:05:24.645 "nvmf_delete_target", 00:05:24.645 "nvmf_create_target", 00:05:24.645 "nvmf_subsystem_allow_any_host", 00:05:24.645 "nvmf_subsystem_remove_host", 00:05:24.645 "nvmf_subsystem_add_host", 00:05:24.645 "nvmf_ns_remove_host", 00:05:24.645 "nvmf_ns_add_host", 00:05:24.645 "nvmf_subsystem_remove_ns", 00:05:24.645 "nvmf_subsystem_add_ns", 00:05:24.645 "nvmf_subsystem_listener_set_ana_state", 00:05:24.645 "nvmf_discovery_get_referrals", 00:05:24.645 "nvmf_discovery_remove_referral", 00:05:24.645 
"nvmf_discovery_add_referral", 00:05:24.645 "nvmf_subsystem_remove_listener", 00:05:24.645 "nvmf_subsystem_add_listener", 00:05:24.645 "nvmf_delete_subsystem", 00:05:24.645 "nvmf_create_subsystem", 00:05:24.645 "nvmf_get_subsystems", 00:05:24.645 "env_dpdk_get_mem_stats", 00:05:24.645 "nbd_get_disks", 00:05:24.645 "nbd_stop_disk", 00:05:24.645 "nbd_start_disk", 00:05:24.645 "ublk_recover_disk", 00:05:24.645 "ublk_get_disks", 00:05:24.645 "ublk_stop_disk", 00:05:24.645 "ublk_start_disk", 00:05:24.645 "ublk_destroy_target", 00:05:24.645 "ublk_create_target", 00:05:24.645 "virtio_blk_create_transport", 00:05:24.645 "virtio_blk_get_transports", 00:05:24.645 "vhost_controller_set_coalescing", 00:05:24.645 "vhost_get_controllers", 00:05:24.645 "vhost_delete_controller", 00:05:24.645 "vhost_create_blk_controller", 00:05:24.645 "vhost_scsi_controller_remove_target", 00:05:24.645 "vhost_scsi_controller_add_target", 00:05:24.645 "vhost_start_scsi_controller", 00:05:24.645 "vhost_create_scsi_controller", 00:05:24.645 "thread_set_cpumask", 00:05:24.645 "framework_get_governor", 00:05:24.645 "framework_get_scheduler", 00:05:24.645 "framework_set_scheduler", 00:05:24.645 "framework_get_reactors", 00:05:24.645 "thread_get_io_channels", 00:05:24.645 "thread_get_pollers", 00:05:24.645 "thread_get_stats", 00:05:24.645 "framework_monitor_context_switch", 00:05:24.645 "spdk_kill_instance", 00:05:24.645 "log_enable_timestamps", 00:05:24.645 "log_get_flags", 00:05:24.645 "log_clear_flag", 00:05:24.645 "log_set_flag", 00:05:24.645 "log_get_level", 00:05:24.645 "log_set_level", 00:05:24.645 "log_get_print_level", 00:05:24.645 "log_set_print_level", 00:05:24.645 "framework_enable_cpumask_locks", 00:05:24.645 "framework_disable_cpumask_locks", 00:05:24.645 "framework_wait_init", 00:05:24.645 "framework_start_init", 00:05:24.645 "scsi_get_devices", 00:05:24.645 "bdev_get_histogram", 00:05:24.645 "bdev_enable_histogram", 00:05:24.645 "bdev_set_qos_limit", 00:05:24.645 
"bdev_set_qd_sampling_period", 00:05:24.645 "bdev_get_bdevs", 00:05:24.645 "bdev_reset_iostat", 00:05:24.645 "bdev_get_iostat", 00:05:24.645 "bdev_examine", 00:05:24.645 "bdev_wait_for_examine", 00:05:24.645 "bdev_set_options", 00:05:24.645 "notify_get_notifications", 00:05:24.645 "notify_get_types", 00:05:24.645 "accel_get_stats", 00:05:24.645 "accel_set_options", 00:05:24.645 "accel_set_driver", 00:05:24.645 "accel_crypto_key_destroy", 00:05:24.645 "accel_crypto_keys_get", 00:05:24.645 "accel_crypto_key_create", 00:05:24.645 "accel_assign_opc", 00:05:24.645 "accel_get_module_info", 00:05:24.645 "accel_get_opc_assignments", 00:05:24.645 "vmd_rescan", 00:05:24.645 "vmd_remove_device", 00:05:24.645 "vmd_enable", 00:05:24.645 "sock_get_default_impl", 00:05:24.645 "sock_set_default_impl", 00:05:24.645 "sock_impl_set_options", 00:05:24.645 "sock_impl_get_options", 00:05:24.645 "iobuf_get_stats", 00:05:24.645 "iobuf_set_options", 00:05:24.645 "keyring_get_keys", 00:05:24.645 "framework_get_pci_devices", 00:05:24.645 "framework_get_config", 00:05:24.645 "framework_get_subsystems", 00:05:24.645 "vfu_tgt_set_base_path", 00:05:24.645 "trace_get_info", 00:05:24.645 "trace_get_tpoint_group_mask", 00:05:24.645 "trace_disable_tpoint_group", 00:05:24.645 "trace_enable_tpoint_group", 00:05:24.645 "trace_clear_tpoint_mask", 00:05:24.645 "trace_set_tpoint_mask", 00:05:24.645 "spdk_get_version", 00:05:24.645 "rpc_get_methods" 00:05:24.645 ] 00:05:24.645 23:31:13 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:24.645 23:31:13 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:24.645 23:31:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.645 23:31:13 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:24.645 23:31:13 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 829747 00:05:24.645 23:31:13 spdkcli_tcp -- common/autotest_common.sh@942 -- # '[' -z 829747 ']' 00:05:24.645 23:31:13 spdkcli_tcp -- 
common/autotest_common.sh@946 -- # kill -0 829747 00:05:24.645 23:31:13 spdkcli_tcp -- common/autotest_common.sh@947 -- # uname 00:05:24.645 23:31:13 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:24.645 23:31:13 spdkcli_tcp -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 829747 00:05:24.645 23:31:13 spdkcli_tcp -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:24.645 23:31:13 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:24.645 23:31:13 spdkcli_tcp -- common/autotest_common.sh@960 -- # echo 'killing process with pid 829747' 00:05:24.645 killing process with pid 829747 00:05:24.645 23:31:13 spdkcli_tcp -- common/autotest_common.sh@961 -- # kill 829747 00:05:24.646 23:31:13 spdkcli_tcp -- common/autotest_common.sh@966 -- # wait 829747 00:05:24.904 00:05:24.904 real 0m1.504s 00:05:24.904 user 0m2.816s 00:05:24.904 sys 0m0.425s 00:05:24.904 23:31:13 spdkcli_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:24.904 23:31:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.904 ************************************ 00:05:24.904 END TEST spdkcli_tcp 00:05:24.904 ************************************ 00:05:24.904 23:31:13 -- common/autotest_common.sh@1136 -- # return 0 00:05:24.904 23:31:13 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:24.904 23:31:13 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:24.904 23:31:13 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:24.904 23:31:13 -- common/autotest_common.sh@10 -- # set +x 00:05:24.904 ************************************ 00:05:24.904 START TEST dpdk_mem_utility 00:05:24.904 ************************************ 00:05:24.904 23:31:13 dpdk_mem_utility -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.162 * Looking for test storage... 00:05:25.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:25.162 23:31:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:25.162 23:31:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=830056 00:05:25.162 23:31:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 830056 00:05:25.162 23:31:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.162 23:31:13 dpdk_mem_utility -- common/autotest_common.sh@823 -- # '[' -z 830056 ']' 00:05:25.162 23:31:13 dpdk_mem_utility -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.162 23:31:13 dpdk_mem_utility -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:25.162 23:31:13 dpdk_mem_utility -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.162 23:31:13 dpdk_mem_utility -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:25.162 23:31:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.162 [2024-07-15 23:31:13.996537] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:05:25.162 [2024-07-15 23:31:13.996591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830056 ] 00:05:25.162 [2024-07-15 23:31:14.061943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.420 [2024-07-15 23:31:14.149917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.989 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:25.989 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@856 -- # return 0 00:05:25.989 23:31:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:25.989 23:31:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:25.989 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:25.989 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.989 { 00:05:25.989 "filename": "/tmp/spdk_mem_dump.txt" 00:05:25.989 } 00:05:25.989 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:25.989 23:31:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:25.989 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:25.989 1 heaps totaling size 814.000000 MiB 00:05:25.989 size: 814.000000 MiB heap id: 0 00:05:25.989 end heaps---------- 00:05:25.989 8 mempools totaling size 598.116089 MiB 00:05:25.989 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:25.989 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:25.989 size: 84.521057 MiB name: bdev_io_830056 00:05:25.989 size: 51.011292 MiB name: evtpool_830056 00:05:25.989 size: 50.003479 MiB name: msgpool_830056 00:05:25.989 
size: 21.763794 MiB name: PDU_Pool 00:05:25.989 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:25.989 size: 0.026123 MiB name: Session_Pool 00:05:25.989 end mempools------- 00:05:25.989 6 memzones totaling size 4.142822 MiB 00:05:25.989 size: 1.000366 MiB name: RG_ring_0_830056 00:05:25.989 size: 1.000366 MiB name: RG_ring_1_830056 00:05:25.989 size: 1.000366 MiB name: RG_ring_4_830056 00:05:25.989 size: 1.000366 MiB name: RG_ring_5_830056 00:05:25.989 size: 0.125366 MiB name: RG_ring_2_830056 00:05:25.989 size: 0.015991 MiB name: RG_ring_3_830056 00:05:25.989 end memzones------- 00:05:25.989 23:31:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:25.989 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:25.989 list of free elements. size: 12.519348 MiB 00:05:25.989 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:25.989 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:25.989 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:25.989 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:25.989 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:25.989 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:25.989 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:25.989 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:25.989 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:25.989 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:25.989 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:25.989 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:25.989 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:25.989 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:25.989 element at address: 0x200003a00000 with size: 0.355530 
MiB 00:05:25.989 list of standard malloc elements. size: 199.218079 MiB 00:05:25.989 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:25.989 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:25.989 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:25.989 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:25.989 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:25.989 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:25.989 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:25.989 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:25.989 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:25.989 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:25.989 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:25.989 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000070fdd80 with 
size: 0.000183 MiB 00:05:25.989 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:25.989 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:25.989 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:25.989 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:25.989 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:25.989 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:25.989 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:25.989 list of memzone associated elements. 
size: 602.262573 MiB 00:05:25.989 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:25.989 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:25.989 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:25.989 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:25.989 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:25.989 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_830056_0 00:05:25.989 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:25.989 associated memzone info: size: 48.002930 MiB name: MP_evtpool_830056_0 00:05:25.989 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:25.989 associated memzone info: size: 48.002930 MiB name: MP_msgpool_830056_0 00:05:25.989 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:25.989 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:25.989 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:25.989 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:25.989 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:25.990 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_830056 00:05:25.990 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:25.990 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_830056 00:05:25.990 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:25.990 associated memzone info: size: 1.007996 MiB name: MP_evtpool_830056 00:05:25.990 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:25.990 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:25.990 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:25.990 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:25.990 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:25.990 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:25.990 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:25.990 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:25.990 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:25.990 associated memzone info: size: 1.000366 MiB name: RG_ring_0_830056 00:05:25.990 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:25.990 associated memzone info: size: 1.000366 MiB name: RG_ring_1_830056 00:05:25.990 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:25.990 associated memzone info: size: 1.000366 MiB name: RG_ring_4_830056 00:05:25.990 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:25.990 associated memzone info: size: 1.000366 MiB name: RG_ring_5_830056 00:05:25.990 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:25.990 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_830056 00:05:25.990 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:25.990 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:25.990 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:25.990 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:25.990 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:25.990 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:25.990 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:25.990 associated memzone info: size: 0.125366 MiB name: RG_ring_2_830056 00:05:25.990 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:25.990 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:25.990 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:25.990 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:25.990 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:25.990 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_830056 00:05:25.990 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:25.990 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:25.990 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:25.990 associated memzone info: size: 0.000183 MiB name: MP_msgpool_830056 00:05:25.990 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:25.990 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_830056 00:05:25.990 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:25.990 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:25.990 23:31:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:25.990 23:31:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 830056 00:05:25.990 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@942 -- # '[' -z 830056 ']' 00:05:25.990 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@946 -- # kill -0 830056 00:05:25.990 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@947 -- # uname 00:05:25.990 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:25.990 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 830056 00:05:25.990 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:25.990 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:25.990 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # echo 'killing process with pid 830056' 00:05:25.990 killing process with pid 830056 00:05:25.990 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@961 -- # kill 830056 00:05:25.990 23:31:14 dpdk_mem_utility -- common/autotest_common.sh@966 -- # wait 830056 00:05:26.558 00:05:26.558 real 0m1.380s 00:05:26.558 user 0m1.440s 
00:05:26.558 sys 0m0.382s 00:05:26.558 23:31:15 dpdk_mem_utility -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:26.558 23:31:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.558 ************************************ 00:05:26.558 END TEST dpdk_mem_utility 00:05:26.558 ************************************ 00:05:26.558 23:31:15 -- common/autotest_common.sh@1136 -- # return 0 00:05:26.558 23:31:15 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:26.558 23:31:15 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:26.558 23:31:15 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:26.558 23:31:15 -- common/autotest_common.sh@10 -- # set +x 00:05:26.558 ************************************ 00:05:26.558 START TEST event 00:05:26.558 ************************************ 00:05:26.558 23:31:15 event -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:26.558 * Looking for test storage... 
00:05:26.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:26.558 23:31:15 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:26.558 23:31:15 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:26.558 23:31:15 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.558 23:31:15 event -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:05:26.558 23:31:15 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:26.558 23:31:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.558 ************************************ 00:05:26.558 START TEST event_perf 00:05:26.558 ************************************ 00:05:26.558 23:31:15 event.event_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.558 Running I/O for 1 seconds...[2024-07-15 23:31:15.435398] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:05:26.558 [2024-07-15 23:31:15.435463] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830405 ] 00:05:26.558 [2024-07-15 23:31:15.495739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.817 [2024-07-15 23:31:15.573506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.817 [2024-07-15 23:31:15.573606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.817 [2024-07-15 23:31:15.573692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.817 [2024-07-15 23:31:15.573694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.753 Running I/O for 1 seconds... 00:05:27.753 lcore 0: 209354 00:05:27.753 lcore 1: 209353 00:05:27.753 lcore 2: 209353 00:05:27.753 lcore 3: 209355 00:05:27.753 done. 
00:05:27.753 00:05:27.753 real 0m1.230s 00:05:27.753 user 0m4.144s 00:05:27.753 sys 0m0.083s 00:05:27.753 23:31:16 event.event_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:27.753 23:31:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.753 ************************************ 00:05:27.753 END TEST event_perf 00:05:27.753 ************************************ 00:05:27.753 23:31:16 event -- common/autotest_common.sh@1136 -- # return 0 00:05:27.753 23:31:16 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:27.753 23:31:16 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:05:27.753 23:31:16 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:27.753 23:31:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.753 ************************************ 00:05:27.753 START TEST event_reactor 00:05:27.753 ************************************ 00:05:27.753 23:31:16 event.event_reactor -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:28.012 [2024-07-15 23:31:16.731472] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:05:28.012 [2024-07-15 23:31:16.731539] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830603 ] 00:05:28.012 [2024-07-15 23:31:16.790258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.012 [2024-07-15 23:31:16.863026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.389 test_start 00:05:29.389 oneshot 00:05:29.389 tick 100 00:05:29.389 tick 100 00:05:29.389 tick 250 00:05:29.389 tick 100 00:05:29.389 tick 100 00:05:29.389 tick 100 00:05:29.389 tick 250 00:05:29.389 tick 500 00:05:29.389 tick 100 00:05:29.389 tick 100 00:05:29.389 tick 250 00:05:29.389 tick 100 00:05:29.389 tick 100 00:05:29.389 test_end 00:05:29.389 00:05:29.389 real 0m1.220s 00:05:29.389 user 0m1.143s 00:05:29.389 sys 0m0.074s 00:05:29.389 23:31:17 event.event_reactor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:29.389 23:31:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:29.389 ************************************ 00:05:29.389 END TEST event_reactor 00:05:29.389 ************************************ 00:05:29.389 23:31:17 event -- common/autotest_common.sh@1136 -- # return 0 00:05:29.389 23:31:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.389 23:31:17 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:05:29.389 23:31:17 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:29.389 23:31:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.389 ************************************ 00:05:29.389 START TEST event_reactor_perf 00:05:29.389 ************************************ 00:05:29.389 23:31:17 event.event_reactor_perf -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.389 [2024-07-15 23:31:18.015045] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:29.389 [2024-07-15 23:31:18.015113] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830836 ] 00:05:29.389 [2024-07-15 23:31:18.073465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.389 [2024-07-15 23:31:18.146545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.326 test_start 00:05:30.326 test_end 00:05:30.326 Performance: 503837 events per second 00:05:30.326 00:05:30.326 real 0m1.222s 00:05:30.326 user 0m1.143s 00:05:30.326 sys 0m0.074s 00:05:30.326 23:31:19 event.event_reactor_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:30.326 23:31:19 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:30.326 ************************************ 00:05:30.326 END TEST event_reactor_perf 00:05:30.326 ************************************ 00:05:30.326 23:31:19 event -- common/autotest_common.sh@1136 -- # return 0 00:05:30.326 23:31:19 event -- event/event.sh@49 -- # uname -s 00:05:30.326 23:31:19 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:30.326 23:31:19 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.326 23:31:19 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:30.326 23:31:19 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:30.326 23:31:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.326 ************************************ 00:05:30.326 START TEST event_scheduler 00:05:30.326 
************************************ 00:05:30.326 23:31:19 event.event_scheduler -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:30.585 * Looking for test storage... 00:05:30.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:30.585 23:31:19 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:30.585 23:31:19 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=831114 00:05:30.585 23:31:19 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.585 23:31:19 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:30.585 23:31:19 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 831114 00:05:30.585 23:31:19 event.event_scheduler -- common/autotest_common.sh@823 -- # '[' -z 831114 ']' 00:05:30.585 23:31:19 event.event_scheduler -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.585 23:31:19 event.event_scheduler -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:30.585 23:31:19 event.event_scheduler -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.585 23:31:19 event.event_scheduler -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:30.585 23:31:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.585 [2024-07-15 23:31:19.403598] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:05:30.585 [2024-07-15 23:31:19.403644] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831114 ] 00:05:30.585 [2024-07-15 23:31:19.456434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:30.585 [2024-07-15 23:31:19.535741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.585 [2024-07-15 23:31:19.535848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.585 [2024-07-15 23:31:19.535932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.585 [2024-07-15 23:31:19.535934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.520 23:31:20 event.event_scheduler -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:31.520 23:31:20 event.event_scheduler -- common/autotest_common.sh@856 -- # return 0 00:05:31.520 23:31:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:31.520 23:31:20 event.event_scheduler -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.520 23:31:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.520 [2024-07-15 23:31:20.222289] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:31.520 [2024-07-15 23:31:20.222313] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:31.520 [2024-07-15 23:31:20.222322] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:31.520 [2024-07-15 23:31:20.222328] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:31.520 [2024-07-15 23:31:20.222333] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:31.520 23:31:20 
event.event_scheduler -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.520 23:31:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:31.520 23:31:20 event.event_scheduler -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.520 23:31:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.520 [2024-07-15 23:31:20.294741] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:31.520 23:31:20 event.event_scheduler -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.520 23:31:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:31.520 23:31:20 event.event_scheduler -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:31.520 23:31:20 event.event_scheduler -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 ************************************ 00:05:31.521 START TEST scheduler_create_thread 00:05:31.521 ************************************ 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1117 -- # scheduler_create_thread 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 2 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
active_pinned -m 0x2 -a 100 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 3 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 4 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 5 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 6 00:05:31.521 23:31:20 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 7 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 8 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 9 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:31.521 23:31:20 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 10 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:31.521 23:31:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.941 23:31:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:05:32.941 23:31:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:32.941 23:31:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:32.941 23:31:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable
00:05:32.941 23:31:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:34.359 23:31:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:05:34.359
00:05:34.359 real 0m2.621s
00:05:34.359 user 0m0.022s
00:05:34.359 sys 0m0.006s
00:05:34.359 23:31:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:34.359 23:31:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:34.359 ************************************
00:05:34.359 END TEST scheduler_create_thread
00:05:34.359 ************************************
00:05:34.359 23:31:22 event.event_scheduler -- common/autotest_common.sh@1136 -- # return 0
00:05:34.359 23:31:22 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:34.359 23:31:22 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 831114
00:05:34.359 23:31:22 event.event_scheduler -- common/autotest_common.sh@942 -- # '[' -z 831114 ']'
00:05:34.359 23:31:22 event.event_scheduler -- common/autotest_common.sh@946 -- # kill -0 831114
00:05:34.359 23:31:22 event.event_scheduler -- common/autotest_common.sh@947 -- # uname
00:05:34.359 23:31:22 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:05:34.359 23:31:22 event.event_scheduler -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 831114
00:05:34.359 23:31:23 event.event_scheduler -- common/autotest_common.sh@948 -- # process_name=reactor_2
00:05:34.359 23:31:23 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']'
00:05:34.359 23:31:23 event.event_scheduler -- common/autotest_common.sh@960 -- # echo 'killing process with pid 831114'
killing process with pid 831114
00:05:34.359 23:31:23 event.event_scheduler -- common/autotest_common.sh@961 -- # kill 831114
00:05:34.359 23:31:23 event.event_scheduler -- common/autotest_common.sh@966 -- # wait 831114
00:05:34.617 [2024-07-15 23:31:23.433002] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:34.875
00:05:34.875 real 0m4.357s
00:05:34.875 user 0m8.278s
00:05:34.875 sys 0m0.345s
00:05:34.875 23:31:23 event.event_scheduler -- common/autotest_common.sh@1118 -- # xtrace_disable
00:05:34.875 23:31:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:34.875 ************************************
00:05:34.875 END TEST event_scheduler
00:05:34.875 ************************************
00:05:34.875 23:31:23 event -- common/autotest_common.sh@1136 -- # return 0
00:05:34.875 23:31:23 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:34.875 23:31:23 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:34.875 23:31:23 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:05:34.875 23:31:23 event -- common/autotest_common.sh@1099 -- # xtrace_disable
00:05:34.875 23:31:23 event -- common/autotest_common.sh@10 -- # set +x
00:05:34.875 ************************************
00:05:34.875 START TEST app_repeat
00:05:34.875 ************************************
00:05:34.875 23:31:23 event.app_repeat -- common/autotest_common.sh@1117 -- # app_repeat_test
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@19 -- # repeat_pid=831976
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 831976'
Process app_repeat pid: 831976
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
00:05:34.875 23:31:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 831976 /var/tmp/spdk-nbd.sock
00:05:34.875 23:31:23 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 831976 ']'
00:05:34.875 23:31:23 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:34.875 23:31:23 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100
00:05:34.875 23:31:23 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:34.875 23:31:23 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable
00:05:34.875 23:31:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:34.875 [2024-07-15 23:31:23.720109] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
[2024-07-15 23:31:23.720148] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831976 ]
00:05:34.875 [2024-07-15 23:31:23.773763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:35.133 [2024-07-15 23:31:23.855395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:35.133 [2024-07-15 23:31:23.855398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.133 23:31:23 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:05:35.133 23:31:23 event.app_repeat -- common/autotest_common.sh@856 -- # return 0
00:05:35.133 23:31:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:35.133 Malloc0
00:05:35.391 23:31:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:35.391 Malloc1
00:05:35.391 23:31:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:35.391 23:31:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:35.392 23:31:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:35.392 23:31:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:35.392 23:31:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:35.650 /dev/nbd0
00:05:35.650 23:31:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:35.650 23:31:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@861 -- # local i
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 ))
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 ))
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@865 -- # break
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 ))
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 ))
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:35.651 1+0 records in
00:05:35.651 1+0 records out
00:05:35.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220247 s, 18.6 MB/s
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']'
00:05:35.651 23:31:24 event.app_repeat -- common/autotest_common.sh@881 -- # return 0
00:05:35.651 23:31:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:35.651 23:31:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:35.651 23:31:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:35.910 /dev/nbd1
00:05:35.910 23:31:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:35.910 23:31:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@861 -- # local i
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 ))
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 ))
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@865 -- # break
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 ))
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 ))
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:35.910 1+0 records in
00:05:35.910 1+0 records out
00:05:35.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161593 s, 25.3 MB/s
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']'
00:05:35.910 23:31:24 event.app_repeat -- common/autotest_common.sh@881 -- # return 0
00:05:35.910 23:31:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:35.910 23:31:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:35.910 23:31:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:35.910 23:31:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:35.910 23:31:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:36.169 {
00:05:36.169 "nbd_device": "/dev/nbd0",
00:05:36.169 "bdev_name": "Malloc0"
00:05:36.169 },
00:05:36.169 {
00:05:36.169 "nbd_device": "/dev/nbd1",
00:05:36.169 "bdev_name": "Malloc1"
00:05:36.169 }
00:05:36.169 ]'
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:36.169 {
00:05:36.169 "nbd_device": "/dev/nbd0",
00:05:36.169 "bdev_name": "Malloc0"
00:05:36.169 },
00:05:36.169 {
00:05:36.169 "nbd_device": "/dev/nbd1",
00:05:36.169 "bdev_name": "Malloc1"
00:05:36.169 }
00:05:36.169 ]'
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:36.169 /dev/nbd1'
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:36.169 /dev/nbd1'
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:36.169 256+0 records in
00:05:36.169 256+0 records out
00:05:36.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103385 s, 101 MB/s
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:36.169 256+0 records in
00:05:36.169 256+0 records out
00:05:36.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136584 s, 76.8 MB/s
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:36.169 23:31:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:36.169 256+0 records in
00:05:36.169 256+0 records out
00:05:36.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149298 s, 70.2 MB/s
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:36.169 23:31:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:36.428 23:31:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:36.428 23:31:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:36.428 23:31:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:36.428 23:31:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:36.428 23:31:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:36.428 23:31:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:36.428 23:31:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:36.428 23:31:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:36.428 23:31:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:36.428 23:31:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:36.687 23:31:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:36.687 23:31:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:36.947 23:31:25 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:37.206 [2024-07-15 23:31:26.015106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:37.206 [2024-07-15 23:31:26.082484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:37.206 [2024-07-15 23:31:26.082487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.206 [2024-07-15 23:31:26.123112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:37.206 [2024-07-15 23:31:26.123152] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:40.491 23:31:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:40.491 23:31:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
spdk_app_start Round 1
00:05:40.491 23:31:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 831976 /var/tmp/spdk-nbd.sock
00:05:40.491 23:31:28 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 831976 ']'
00:05:40.491 23:31:28 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:40.491 23:31:28 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100
00:05:40.491 23:31:28 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:40.491 23:31:28 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable
00:05:40.491 23:31:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:40.491 23:31:29 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:05:40.491 23:31:29 event.app_repeat -- common/autotest_common.sh@856 -- # return 0
00:05:40.491 23:31:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
Malloc0
00:05:40.491 23:31:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
Malloc1
00:05:40.491 23:31:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:40.491 23:31:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:40.749 /dev/nbd0
00:05:40.749 23:31:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:40.749 23:31:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@861 -- # local i
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 ))
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 ))
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@865 -- # break
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 ))
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 ))
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:40.749 1+0 records in
00:05:40.749 1+0 records out
00:05:40.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212129 s, 19.3 MB/s
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']'
00:05:40.749 23:31:29 event.app_repeat -- common/autotest_common.sh@881 -- # return 0
00:05:40.749 23:31:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:40.749 23:31:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:40.749 23:31:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:41.008 /dev/nbd1
00:05:41.008 23:31:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:41.008 23:31:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@861 -- # local i
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 ))
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 ))
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@865 -- # break
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 ))
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 ))
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:41.008 1+0 records in
00:05:41.008 1+0 records out
00:05:41.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183453 s, 22.3 MB/s
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']'
00:05:41.008 23:31:29 event.app_repeat -- common/autotest_common.sh@881 -- # return 0
00:05:41.008 23:31:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:41.008 23:31:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:41.008 23:31:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:41.008 23:31:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.008 23:31:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:41.008 23:31:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:41.008 {
00:05:41.008 "nbd_device": "/dev/nbd0",
00:05:41.008 "bdev_name": "Malloc0"
00:05:41.008 },
00:05:41.008 {
00:05:41.008 "nbd_device": "/dev/nbd1",
00:05:41.008 "bdev_name": "Malloc1"
00:05:41.008 }
00:05:41.008 ]'
00:05:41.008 23:31:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:41.008 {
00:05:41.008 "nbd_device": "/dev/nbd0",
00:05:41.008 "bdev_name": "Malloc0"
00:05:41.008 },
00:05:41.008 {
00:05:41.008 "nbd_device": "/dev/nbd1",
00:05:41.008 "bdev_name": "Malloc1"
00:05:41.008 }
00:05:41.008 ]'
00:05:41.008 23:31:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:41.268 /dev/nbd1'
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:41.268 /dev/nbd1'
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:41.268 23:31:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:41.268 256+0 records in
00:05:41.268 256+0 records out
00:05:41.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103507 s, 101 MB/s
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:41.268 256+0 records in
00:05:41.268 256+0 records out
00:05:41.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141185 s, 74.3 MB/s
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:41.268 256+0 records in
00:05:41.268 256+0 records out
00:05:41.268 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152718 s, 68.7 MB/s
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:41.268 23:31:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:41.527 23:31:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:41.785 23:31:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:41.785 23:31:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:42.044 23:31:30 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:42.302 [2024-07-15 23:31:31.064033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:42.302 [2024-07-15 23:31:31.130531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:42.302 [2024-07-15 23:31:31.130534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.302 [2024-07-15 23:31:31.172022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:42.302 [2024-07-15 23:31:31.172063] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:45.581 23:31:33 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:45.581 23:31:33 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
spdk_app_start Round 2
00:05:45.581 23:31:33 event.app_repeat -- event/event.sh@25 -- # waitforlisten 831976 /var/tmp/spdk-nbd.sock
00:05:45.581 23:31:33 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 831976 ']'
00:05:45.581 23:31:33 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:45.581 23:31:33 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100
00:05:45.581 23:31:33 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:45.581 23:31:33 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:45.581 23:31:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.581 23:31:34 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:45.581 23:31:34 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:05:45.581 23:31:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.581 Malloc0 00:05:45.581 23:31:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.581 Malloc1 00:05:45.581 23:31:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.581 23:31:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.581 23:31:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.581 23:31:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.581 23:31:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.581 23:31:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.581 23:31:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.581 23:31:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.582 23:31:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.582 23:31:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.582 23:31:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.582 23:31:34 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:45.582 23:31:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.582 23:31:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.582 23:31:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.582 23:31:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.841 /dev/nbd0 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.841 1+0 records in 00:05:45.841 1+0 records out 00:05:45.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206808 s, 19.8 MB/s 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:05:45.841 23:31:34 event.app_repeat -- 
common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.841 /dev/nbd1 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.841 1+0 records in 00:05:45.841 1+0 records out 00:05:45.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215811 s, 19.0 MB/s 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:05:45.841 23:31:34 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.841 23:31:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.099 23:31:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.099 { 00:05:46.099 "nbd_device": "/dev/nbd0", 00:05:46.099 "bdev_name": "Malloc0" 00:05:46.099 }, 00:05:46.099 { 00:05:46.099 "nbd_device": "/dev/nbd1", 00:05:46.099 "bdev_name": "Malloc1" 00:05:46.099 } 00:05:46.099 ]' 00:05:46.099 23:31:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.099 23:31:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.099 { 00:05:46.099 "nbd_device": "/dev/nbd0", 00:05:46.099 "bdev_name": "Malloc0" 00:05:46.099 }, 00:05:46.099 { 00:05:46.099 "nbd_device": "/dev/nbd1", 00:05:46.099 "bdev_name": "Malloc1" 00:05:46.099 } 00:05:46.099 ]' 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.099 /dev/nbd1' 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.099 /dev/nbd1' 00:05:46.099 
23:31:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.099 256+0 records in 00:05:46.099 256+0 records out 00:05:46.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103379 s, 101 MB/s 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.099 256+0 records in 00:05:46.099 256+0 records out 00:05:46.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146751 s, 71.5 MB/s 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.099 23:31:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.357 256+0 records in 00:05:46.357 256+0 records out 00:05:46.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147155 s, 71.3 MB/s 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.357 23:31:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.614 23:31:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.614 23:31:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.614 23:31:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.614 23:31:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.614 23:31:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.614 23:31:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.614 23:31:35 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:46.614 23:31:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.614 23:31:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.614 23:31:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.614 23:31:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.873 23:31:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.873 23:31:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.131 23:31:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.131 [2024-07-15 23:31:36.084144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.390 [2024-07-15 23:31:36.152914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.390 [2024-07-15 23:31:36.152916] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.390 [2024-07-15 23:31:36.193848] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.390 [2024-07-15 23:31:36.193887] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.669 23:31:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 831976 /var/tmp/spdk-nbd.sock 00:05:50.669 23:31:38 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 831976 ']' 00:05:50.669 23:31:38 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.669 23:31:38 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:50.669 23:31:38 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:50.669 23:31:38 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:50.669 23:31:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:05:50.669 23:31:39 event.app_repeat -- event/event.sh@39 -- # killprocess 831976 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@942 -- # '[' -z 831976 ']' 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@946 -- # kill -0 831976 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@947 -- # uname 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 831976 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@960 -- # echo 'killing process with pid 831976' 00:05:50.669 killing process with pid 831976 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@961 -- # kill 831976 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@966 -- # wait 831976 00:05:50.669 spdk_app_start is called in Round 0. 00:05:50.669 Shutdown signal received, stop current app iteration 00:05:50.669 Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 reinitialization... 00:05:50.669 spdk_app_start is called in Round 1. 00:05:50.669 Shutdown signal received, stop current app iteration 00:05:50.669 Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 reinitialization... 00:05:50.669 spdk_app_start is called in Round 2. 
00:05:50.669 Shutdown signal received, stop current app iteration 00:05:50.669 Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 reinitialization... 00:05:50.669 spdk_app_start is called in Round 3. 00:05:50.669 Shutdown signal received, stop current app iteration 00:05:50.669 23:31:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:50.669 23:31:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:50.669 00:05:50.669 real 0m15.585s 00:05:50.669 user 0m33.762s 00:05:50.669 sys 0m2.276s 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:50.669 23:31:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.669 ************************************ 00:05:50.669 END TEST app_repeat 00:05:50.669 ************************************ 00:05:50.669 23:31:39 event -- common/autotest_common.sh@1136 -- # return 0 00:05:50.669 23:31:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:50.669 23:31:39 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:50.669 23:31:39 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:50.669 23:31:39 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:50.669 23:31:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.669 ************************************ 00:05:50.669 START TEST cpu_locks 00:05:50.669 ************************************ 00:05:50.669 23:31:39 event.cpu_locks -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:50.669 * Looking for test storage... 
00:05:50.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:50.669 23:31:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:50.669 23:31:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:50.669 23:31:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:50.669 23:31:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:50.669 23:31:39 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:50.669 23:31:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:50.669 23:31:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.669 ************************************ 00:05:50.669 START TEST default_locks 00:05:50.669 ************************************ 00:05:50.669 23:31:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1117 -- # default_locks 00:05:50.669 23:31:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=834835 00:05:50.669 23:31:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 834835 00:05:50.669 23:31:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.669 23:31:39 event.cpu_locks.default_locks -- common/autotest_common.sh@823 -- # '[' -z 834835 ']' 00:05:50.669 23:31:39 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.669 23:31:39 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:50.669 23:31:39 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:50.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.669 23:31:39 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:50.669 23:31:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.669 [2024-07-15 23:31:39.526559] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:50.669 [2024-07-15 23:31:39.526606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834835 ] 00:05:50.669 [2024-07-15 23:31:39.579452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.927 [2024-07-15 23:31:39.659604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.530 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:51.530 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # return 0 00:05:51.530 23:31:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 834835 00:05:51.530 23:31:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 834835 00:05:51.530 23:31:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.787 lslocks: write error 00:05:51.787 23:31:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 834835 00:05:51.787 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@942 -- # '[' -z 834835 ']' 00:05:51.787 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # kill -0 834835 00:05:51.787 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # uname 00:05:51.787 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 
00:05:51.787 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 834835 00:05:51.787 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:51.787 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:51.787 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 834835' 00:05:51.788 killing process with pid 834835 00:05:51.788 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@961 -- # kill 834835 00:05:51.788 23:31:40 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # wait 834835 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 834835 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # local es=0 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 834835 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # waitforlisten 834835 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@823 -- # '[' -z 834835 ']' 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:52.352 
23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (834835) - No such process 00:05:52.352 ERROR: process (pid: 834835) is no longer running 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # return 1 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # es=1 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:52.352 00:05:52.352 real 0m1.561s 00:05:52.352 user 0m1.629s 00:05:52.352 sys 0m0.534s 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:52.352 23:31:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.352 
************************************ 00:05:52.352 END TEST default_locks 00:05:52.352 ************************************ 00:05:52.352 23:31:41 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:05:52.352 23:31:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:52.352 23:31:41 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:52.352 23:31:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:52.352 23:31:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.352 ************************************ 00:05:52.352 START TEST default_locks_via_rpc 00:05:52.352 ************************************ 00:05:52.352 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1117 -- # default_locks_via_rpc 00:05:52.352 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=835101 00:05:52.352 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 835101 00:05:52.352 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.352 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 835101 ']' 00:05:52.352 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.352 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:52.352 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:52.352 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:52.352 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.352 [2024-07-15 23:31:41.153356] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:52.352 [2024-07-15 23:31:41.153399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835101 ] 00:05:52.352 [2024-07-15 23:31:41.207353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.352 [2024-07-15 23:31:41.279443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 835101 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 835101 00:05:53.311 23:31:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.311 23:31:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 835101 00:05:53.311 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@942 -- # '[' -z 835101 ']' 00:05:53.311 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # kill -0 835101 00:05:53.311 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # uname 00:05:53.311 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:53.311 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 835101 00:05:53.311 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:53.311 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:53.311 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 835101' 00:05:53.311 killing process with pid 835101 00:05:53.311 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@961 -- # kill 835101 00:05:53.311 23:31:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # wait 835101 00:05:53.569 00:05:53.569 real 0m1.335s 00:05:53.569 user 0m1.394s 00:05:53.569 sys 0m0.402s 00:05:53.569 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:53.569 23:31:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.569 ************************************ 00:05:53.569 END TEST default_locks_via_rpc 00:05:53.569 ************************************ 00:05:53.569 23:31:42 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:05:53.569 23:31:42 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:53.569 23:31:42 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:53.569 23:31:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:53.569 23:31:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.569 ************************************ 00:05:53.569 START TEST non_locking_app_on_locked_coremask 00:05:53.569 ************************************ 00:05:53.569 23:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1117 -- # non_locking_app_on_locked_coremask 00:05:53.569 23:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=835362 00:05:53.569 23:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 835362 /var/tmp/spdk.sock 00:05:53.569 23:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 835362 ']' 00:05:53.569 23:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.569 23:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- 
# local max_retries=100 00:05:53.569 23:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.569 23:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:53.569 23:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.569 23:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.827 [2024-07-15 23:31:42.552311] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:53.827 [2024-07-15 23:31:42.552356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835362 ] 00:05:53.827 [2024-07-15 23:31:42.604670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.827 [2024-07-15 23:31:42.684070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=835592 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 835592 /var/tmp/spdk2.sock 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- 
# '[' -z 835592 ']' 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:54.395 23:31:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.653 [2024-07-15 23:31:43.376716] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:54.653 [2024-07-15 23:31:43.376765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835592 ] 00:05:54.654 [2024-07-15 23:31:43.449065] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:54.654 [2024-07-15 23:31:43.449090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.654 [2024-07-15 23:31:43.599431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.219 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:55.219 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:05:55.219 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 835362 00:05:55.219 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.219 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 835362 00:05:56.153 lslocks: write error 00:05:56.153 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 835362 00:05:56.153 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 835362 ']' 00:05:56.153 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 835362 00:05:56.153 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:05:56.153 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:56.153 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 835362 00:05:56.154 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:56.154 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:56.154 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@960 -- # echo 'killing process with pid 835362' 00:05:56.154 killing process with pid 835362 00:05:56.154 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 835362 00:05:56.154 23:31:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 835362 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 835592 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 835592 ']' 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 835592 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 835592 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 835592' 00:05:56.720 killing process with pid 835592 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 835592 00:05:56.720 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 835592 00:05:56.979 00:05:56.979 real 0m3.286s 00:05:56.979 user 0m3.483s 00:05:56.979 sys 0m0.930s 00:05:56.979 23:31:45 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:56.979 23:31:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.979 ************************************ 00:05:56.979 END TEST non_locking_app_on_locked_coremask 00:05:56.979 ************************************ 00:05:56.979 23:31:45 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:05:56.979 23:31:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:56.979 23:31:45 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:56.979 23:31:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:56.979 23:31:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.979 ************************************ 00:05:56.979 START TEST locking_app_on_unlocked_coremask 00:05:56.979 ************************************ 00:05:56.979 23:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1117 -- # locking_app_on_unlocked_coremask 00:05:56.979 23:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=836085 00:05:56.979 23:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 836085 /var/tmp/spdk.sock 00:05:56.979 23:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@823 -- # '[' -z 836085 ']' 00:05:56.979 23:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.979 23:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:56.979 23:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@828 -- # local max_retries=100 00:05:56.979 23:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.979 23:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:56.979 23:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.979 [2024-07-15 23:31:45.889638] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:56.979 [2024-07-15 23:31:45.889679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836085 ] 00:05:56.979 [2024-07-15 23:31:45.942307] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:56.979 [2024-07-15 23:31:45.942333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.237 [2024-07-15 23:31:46.023374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # return 0 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=836099 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 836099 /var/tmp/spdk2.sock 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@823 -- # '[' -z 836099 ']' 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.804 23:31:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:57.804 [2024-07-15 23:31:46.738377] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:05:57.804 [2024-07-15 23:31:46.738427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836099 ] 00:05:58.062 [2024-07-15 23:31:46.813918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.062 [2024-07-15 23:31:46.959530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.628 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:58.628 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # return 0 00:05:58.628 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 836099 00:05:58.628 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 836099 00:05:58.628 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.886 lslocks: write error 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 836085 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@942 -- # '[' -z 836085 ']' 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # kill -0 836085 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # uname 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 836085 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 836085' 00:05:58.886 killing process with pid 836085 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill 836085 00:05:58.886 23:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # wait 836085 00:05:59.823 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 836099 00:05:59.823 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@942 -- # '[' -z 836099 ']' 00:05:59.823 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # kill -0 836099 00:05:59.823 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # uname 00:05:59.823 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:59.823 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 836099 00:05:59.823 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:59.823 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:59.823 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 836099' 00:05:59.823 killing process with pid 836099 00:05:59.823 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill 836099 00:05:59.823 23:31:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # wait 836099 00:06:00.082 00:06:00.082 real 0m2.961s 00:06:00.082 user 0m3.180s 00:06:00.082 sys 0m0.786s 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.082 ************************************ 00:06:00.082 END TEST locking_app_on_unlocked_coremask 00:06:00.082 ************************************ 00:06:00.082 23:31:48 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:00.082 23:31:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:00.082 23:31:48 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:00.082 23:31:48 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:00.082 23:31:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.082 ************************************ 00:06:00.082 START TEST locking_app_on_locked_coremask 00:06:00.082 ************************************ 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1117 -- # locking_app_on_locked_coremask 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=836587 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 836587 /var/tmp/spdk.sock 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 836587 ']' 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:00.082 23:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.082 [2024-07-15 23:31:48.910608] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:00.082 [2024-07-15 23:31:48.910647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836587 ] 00:06:00.082 [2024-07-15 23:31:48.964802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.082 [2024-07-15 23:31:49.044748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=836708 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 836708 /var/tmp/spdk2.sock 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # local es=0 
00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 836708 /var/tmp/spdk2.sock 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # waitforlisten 836708 /var/tmp/spdk2.sock 00:06:01.017 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 836708 ']' 00:06:01.018 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.018 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:01.018 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:01.018 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable
00:06:01.018 23:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:01.018 [2024-07-15 23:31:49.763163] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:06:01.018 [2024-07-15 23:31:49.763212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836708 ]
00:06:01.018 [2024-07-15 23:31:49.840068] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 836587 has claimed it.
00:06:01.018 [2024-07-15 23:31:49.840106] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:01.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (836708) - No such process
00:06:01.583 ERROR: process (pid: 836708) is no longer running
00:06:01.583 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:06:01.583 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 1
00:06:01.583 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # es=1
00:06:01.583 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:06:01.583 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:06:01.583 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:06:01.583 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 836587
00:06:01.583 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 836587
00:06:01.583 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:01.841 lslocks: write error
00:06:01.841 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 836587
00:06:01.841 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 836587 ']'
00:06:01.841 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 836587
00:06:01.841 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname
00:06:02.100 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:06:02.100 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 836587
00:06:02.100 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:06:02.100 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:06:02.100 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 836587'
00:06:02.100 killing process with pid 836587
00:06:02.100 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 836587
00:06:02.100 23:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 836587
00:06:02.358
00:06:02.358 real 0m2.292s
00:06:02.358 user 0m2.548s
00:06:02.358 sys 0m0.602s
00:06:02.358 23:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:02.358 23:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.358 ************************************
00:06:02.358 END TEST locking_app_on_locked_coremask
00:06:02.359 ************************************
00:06:02.359 23:31:51 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0
00:06:02.359 23:31:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:02.359 23:31:51 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:06:02.359 23:31:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:02.359 23:31:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:02.359 ************************************
00:06:02.359 START TEST locking_overlapped_coremask
00:06:02.359 ************************************
00:06:02.359 23:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1117 -- # locking_overlapped_coremask
00:06:02.359 23:31:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=837072
00:06:02.359 23:31:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 837072 /var/tmp/spdk.sock
00:06:02.359 23:31:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:02.359 23:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@823 -- # '[' -z 837072 ']'
00:06:02.359 23:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:02.359 23:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # local max_retries=100
00:06:02.359 23:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:02.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:02.359 23:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # xtrace_disable
00:06:02.359 23:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:02.359 [2024-07-15 23:31:51.280318] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:06:02.359 [2024-07-15 23:31:51.280364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837072 ]
00:06:02.618 [2024-07-15 23:31:51.334841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:02.618 [2024-07-15 23:31:51.407480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:02.618 [2024-07-15 23:31:51.407578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:02.618 [2024-07-15 23:31:51.407580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # return 0
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=837097
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 837097 /var/tmp/spdk2.sock
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # local es=0
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 837097 /var/tmp/spdk2.sock
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@630 -- # local arg=waitforlisten
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # type -t waitforlisten
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # waitforlisten 837097 /var/tmp/spdk2.sock
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@823 -- # '[' -z 837097 ']'
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # local max_retries=100
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:03.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # xtrace_disable
00:06:03.184 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:03.184 [2024-07-15 23:31:52.118706] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:06:03.184 [2024-07-15 23:31:52.118753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837097 ]
00:06:03.442 [2024-07-15 23:31:52.192308] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 837072 has claimed it.
00:06:03.442 [2024-07-15 23:31:52.192348] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:04.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (837097) - No such process
00:06:04.008 ERROR: process (pid: 837097) is no longer running
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # return 1
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # es=1
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:04.008 23:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 837072
00:06:04.009 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@942 -- # '[' -z 837072 ']'
00:06:04.009 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # kill -0 837072
00:06:04.009 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # uname
00:06:04.009 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:06:04.009 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 837072
00:06:04.009 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:06:04.009 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:06:04.009 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 837072'
00:06:04.009 killing process with pid 837072
00:06:04.009 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@961 -- # kill 837072
00:06:04.009 23:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # wait 837072
00:06:04.268
00:06:04.268 real 0m1.876s
00:06:04.268 user 0m5.290s
00:06:04.268 sys 0m0.399s
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:04.268 ************************************
00:06:04.268 END TEST locking_overlapped_coremask
00:06:04.268 ************************************
00:06:04.268 23:31:53 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0
00:06:04.268 23:31:53 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:04.268 23:31:53 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:06:04.268 23:31:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:04.268 23:31:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:04.268 ************************************
00:06:04.268 START TEST locking_overlapped_coremask_via_rpc
00:06:04.268 ************************************
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1117 -- # locking_overlapped_coremask_via_rpc
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=837353
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 837353 /var/tmp/spdk.sock
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 837353 ']'
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:04.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable
00:06:04.268 23:31:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:04.268 [2024-07-15 23:31:53.220647] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:06:04.268 [2024-07-15 23:31:53.220690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837353 ]
00:06:04.526 [2024-07-15 23:31:53.274104] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:04.526 [2024-07-15 23:31:53.274129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:04.526 [2024-07-15 23:31:53.344063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:04.526 [2024-07-15 23:31:53.344160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.526 [2024-07-15 23:31:53.344160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=837583
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 837583 /var/tmp/spdk2.sock
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 837583 ']'
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:05.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable
00:06:05.091 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:05.350 [2024-07-15 23:31:54.070935] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:06:05.350 [2024-07-15 23:31:54.070985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837583 ]
00:06:05.350 [2024-07-15 23:31:54.148418] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:05.350 [2024-07-15 23:31:54.148449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:05.350 [2024-07-15 23:31:54.300177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:05.350 [2024-07-15 23:31:54.300293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:05.350 [2024-07-15 23:31:54.300294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # local es=0
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd
00:06:05.917 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:06:05.918 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd
00:06:05.918 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:06:05.918 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:05.918 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable
00:06:05.918 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:05.918 [2024-07-15 23:31:54.884297] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 837353 has claimed it.
00:06:06.176 request:
00:06:06.176 {
00:06:06.176 "method": "framework_enable_cpumask_locks",
00:06:06.176 "req_id": 1
00:06:06.176 }
00:06:06.176 Got JSON-RPC error response
00:06:06.176 response:
00:06:06.176 {
00:06:06.176 "code": -32603,
00:06:06.176 "message": "Failed to claim CPU core: 2"
00:06:06.176 }
00:06:06.176 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]]
00:06:06.176 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # es=1
00:06:06.176 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # (( es > 128 ))
00:06:06.176 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]]
00:06:06.176 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 ))
00:06:06.176 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 837353 /var/tmp/spdk.sock
00:06:06.176 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 837353 ']'
00:06:06.176 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:06.176 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100
00:06:06.176 23:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:06.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:06.176 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:06:06.176 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0
00:06:06.176 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 837583 /var/tmp/spdk2.sock
00:06:06.176 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 837583 ']'
00:06:06.176 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:06.176 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100
00:06:06.176 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:06.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:06.176 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable
00:06:06.176 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:06.470 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:06:06.470 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0
00:06:06.471 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:06:06.471 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:06.471 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:06.471 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:06.471
00:06:06.471 real 0m2.106s
00:06:06.471 user 0m0.867s
00:06:06.471 sys 0m0.175s
00:06:06.471 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:06.471 23:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:06.471 ************************************
00:06:06.471 END TEST locking_overlapped_coremask_via_rpc
00:06:06.471 ************************************
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0
00:06:06.471 23:31:55 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:06:06.471 23:31:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 837353 ]]
00:06:06.471 23:31:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 837353
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 837353 ']'
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 837353
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@947 -- # uname
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 837353
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 837353'
00:06:06.471 killing process with pid 837353
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@961 -- # kill 837353
00:06:06.471 23:31:55 event.cpu_locks -- common/autotest_common.sh@966 -- # wait 837353
00:06:06.730 23:31:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 837583 ]]
00:06:06.730 23:31:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 837583
00:06:06.730 23:31:55 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 837583 ']'
00:06:06.730 23:31:55 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 837583
00:06:06.730 23:31:55 event.cpu_locks -- common/autotest_common.sh@947 -- # uname
00:06:06.730 23:31:55 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:06:06.730 23:31:55 event.cpu_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 837583
00:06:06.989 23:31:55 event.cpu_locks -- common/autotest_common.sh@948 -- # process_name=reactor_2
00:06:06.989 23:31:55 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']'
00:06:06.989 23:31:55 event.cpu_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 837583'
00:06:06.989 killing process with pid 837583
00:06:06.989 23:31:55 event.cpu_locks -- common/autotest_common.sh@961 -- # kill 837583
00:06:06.989 23:31:55 event.cpu_locks -- common/autotest_common.sh@966 -- # wait 837583
00:06:07.248 23:31:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:06:07.248 23:31:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:06:07.248 23:31:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 837353 ]]
00:06:07.248 23:31:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 837353
00:06:07.248 23:31:56 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 837353 ']'
00:06:07.248 23:31:56 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 837353
00:06:07.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (837353) - No such process
00:06:07.248 23:31:56 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'Process with pid 837353 is not found'
00:06:07.248 Process with pid 837353 is not found
00:06:07.248 23:31:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 837583 ]]
00:06:07.248 23:31:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 837583
00:06:07.248 23:31:56 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 837583 ']'
00:06:07.248 23:31:56 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 837583
00:06:07.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (837583) - No such process
00:06:07.248 23:31:56 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'Process with pid 837583 is not found'
00:06:07.248 Process with pid 837583 is not found
00:06:07.248 23:31:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:06:07.248
00:06:07.248 real 0m16.687s
00:06:07.248 user 0m28.943s
00:06:07.248 sys 0m4.700s
00:06:07.248 23:31:56 event.cpu_locks -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:07.248 23:31:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:07.248 ************************************
00:06:07.248 END TEST cpu_locks
00:06:07.248 ************************************
00:06:07.248 23:31:56 event -- common/autotest_common.sh@1136 -- # return 0
00:06:07.248
00:06:07.248 real 0m40.764s
00:06:07.248 user 1m17.594s
00:06:07.248 sys 0m7.866s
00:06:07.248 23:31:56 event -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:07.248 23:31:56 event -- common/autotest_common.sh@10 -- # set +x
00:06:07.248 ************************************
00:06:07.248 END TEST event
00:06:07.248 ************************************
00:06:07.248 23:31:56 -- common/autotest_common.sh@1136 -- # return 0
00:06:07.248 23:31:56 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:06:07.248 23:31:56 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:06:07.248 23:31:56 -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:07.248 23:31:56 -- common/autotest_common.sh@10 -- # set +x
00:06:07.248 ************************************
00:06:07.248 START TEST thread
00:06:07.248 ************************************
00:06:07.248 23:31:56 thread -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:06:07.248 * Looking for test storage...
00:06:07.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:06:07.248 23:31:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:07.248 23:31:56 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']'
00:06:07.248 23:31:56 thread -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:07.248 23:31:56 thread -- common/autotest_common.sh@10 -- # set +x
00:06:07.507 ************************************
00:06:07.507 START TEST thread_poller_perf
00:06:07.507 ************************************
00:06:07.507 23:31:56 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:07.507 [2024-07-15 23:31:56.258663] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:06:07.507 [2024-07-15 23:31:56.258737] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837925 ]
00:06:07.507 [2024-07-15 23:31:56.315953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.507 [2024-07-15 23:31:56.391153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.507 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:06:08.884 ======================================
00:06:08.884 busy:2307818610 (cyc)
00:06:08.884 total_run_count: 411000
00:06:08.884 tsc_hz: 2300000000 (cyc)
00:06:08.884 ======================================
00:06:08.884 poller_cost: 5615 (cyc), 2441 (nsec)
00:06:08.884
00:06:08.884 real 0m1.226s
00:06:08.884 user 0m1.147s
00:06:08.884 sys 0m0.074s
00:06:08.884 23:31:57 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:08.884 23:31:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:08.884 ************************************
00:06:08.884 END TEST thread_poller_perf
00:06:08.884 ************************************
00:06:08.884 23:31:57 thread -- common/autotest_common.sh@1136 -- # return 0
00:06:08.884 23:31:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:08.884 23:31:57 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']'
00:06:08.884 23:31:57 thread -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:08.884 23:31:57 thread -- common/autotest_common.sh@10 -- # set +x
00:06:08.885 ************************************
00:06:08.885 START TEST thread_poller_perf
00:06:08.885 ************************************
00:06:08.885 23:31:57 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:08.885 [2024-07-15 23:31:57.551633] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:06:08.885 [2024-07-15 23:31:57.551701] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838176 ]
00:06:08.885 [2024-07-15 23:31:57.608720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:08.885 [2024-07-15 23:31:57.681602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.885 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:06:09.823 ======================================
00:06:09.823 busy:2301514976 (cyc)
00:06:09.823 total_run_count: 5392000
00:06:09.823 tsc_hz: 2300000000 (cyc)
00:06:09.823 ======================================
00:06:09.823 poller_cost: 426 (cyc), 185 (nsec)
00:06:09.823
00:06:09.823 real 0m1.218s
00:06:09.823 user 0m1.143s
00:06:09.823 sys 0m0.071s
00:06:09.823 23:31:58 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:09.823 23:31:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:06:09.823 ************************************
00:06:09.823 END TEST thread_poller_perf
00:06:09.823 ************************************
00:06:09.823 23:31:58 thread -- common/autotest_common.sh@1136 -- # return 0
00:06:09.823 23:31:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:09.823
00:06:09.823 real 0m2.654s
00:06:09.823 user 0m2.366s
00:06:09.823 sys 0m0.296s
00:06:09.823 23:31:58 thread -- common/autotest_common.sh@1118 -- # xtrace_disable
00:06:09.823 23:31:58 thread -- common/autotest_common.sh@10 -- # set +x
00:06:09.823 ************************************
00:06:09.823 END TEST thread
00:06:09.823 ************************************
00:06:10.082 23:31:58 -- common/autotest_common.sh@1136 -- # return 0
00:06:10.082 23:31:58 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:06:10.082 23:31:58 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']'
00:06:10.082 23:31:58 -- common/autotest_common.sh@1099 -- # xtrace_disable
00:06:10.082 23:31:58 -- common/autotest_common.sh@10 -- # set +x
00:06:10.083 ************************************
00:06:10.083 START TEST accel
00:06:10.083 ************************************
00:06:10.083 23:31:58 accel -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh
00:06:10.083 * Looking for test storage...
00:06:10.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:10.083 23:31:58 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:06:10.083 23:31:58 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:06:10.083 23:31:58 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:10.083 23:31:58 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=838461
00:06:10.083 23:31:58 accel -- accel/accel.sh@63 -- # waitforlisten 838461
00:06:10.083 23:31:58 accel -- common/autotest_common.sh@823 -- # '[' -z 838461 ']'
00:06:10.083 23:31:58 accel -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:10.083 23:31:58 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:06:10.083 23:31:58 accel -- common/autotest_common.sh@828 -- # local max_retries=100
00:06:10.083 23:31:58 accel -- accel/accel.sh@61 -- # build_accel_config
00:06:10.083 23:31:58 accel -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:10.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:10.083 23:31:58 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.083 23:31:58 accel -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:10.083 23:31:58 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.083 23:31:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.083 23:31:58 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.083 23:31:58 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.083 23:31:58 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.083 23:31:58 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:10.083 23:31:58 accel -- accel/accel.sh@41 -- # jq -r . 00:06:10.083 [2024-07-15 23:31:58.989678] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:10.083 [2024-07-15 23:31:58.989723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838461 ] 00:06:10.083 [2024-07-15 23:31:59.043685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.342 [2024-07-15 23:31:59.119211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.917 23:31:59 accel -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:10.917 23:31:59 accel -- common/autotest_common.sh@856 -- # return 0 00:06:10.917 23:31:59 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:10.917 23:31:59 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:10.917 23:31:59 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:10.917 23:31:59 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:10.917 23:31:59 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:10.917 23:31:59 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:10.917 23:31:59 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:10.917 23:31:59 accel -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:10.917 23:31:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.917 23:31:59 accel -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.917 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.917 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.917 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.917 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.917 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # 
read -r opc module 00:06:10.917 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.917 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.917 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.917 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.917 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.917 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.917 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.918 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.918 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.918 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.918 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.918 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.918 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.918 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.918 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:10.918 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.918 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.918 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.918 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.918 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.918 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.918 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.918 23:31:59 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.918 23:31:59 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.918 23:31:59 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.918 23:31:59 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.918 23:31:59 accel -- accel/accel.sh@75 -- # killprocess 838461 00:06:10.918 23:31:59 accel -- common/autotest_common.sh@942 -- # '[' -z 838461 ']' 00:06:10.918 23:31:59 accel -- common/autotest_common.sh@946 -- # kill -0 838461 00:06:10.918 23:31:59 accel -- common/autotest_common.sh@947 -- # uname 00:06:10.918 23:31:59 accel -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:10.918 23:31:59 accel -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 838461 00:06:10.918 23:31:59 accel -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:10.918 23:31:59 accel -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:10.918 23:31:59 accel -- common/autotest_common.sh@960 -- # echo 'killing process with pid 838461' 00:06:10.918 killing process with pid 838461 00:06:10.918 23:31:59 accel -- common/autotest_common.sh@961 -- # kill 838461 00:06:10.918 23:31:59 accel -- common/autotest_common.sh@966 -- # wait 838461 00:06:11.482 23:32:00 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:11.482 23:32:00 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:11.482 23:32:00 
accel -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:11.482 23:32:00 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:11.482 23:32:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.482 23:32:00 accel.accel_help -- common/autotest_common.sh@1117 -- # accel_perf -h 00:06:11.482 23:32:00 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:11.482 23:32:00 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:11.482 23:32:00 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.482 23:32:00 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.482 23:32:00 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.482 23:32:00 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.482 23:32:00 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.482 23:32:00 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:11.482 23:32:00 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:11.482 23:32:00 accel.accel_help -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:11.482 23:32:00 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:11.482 23:32:00 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:11.482 23:32:00 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:11.482 23:32:00 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:11.482 23:32:00 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:11.482 23:32:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.482 ************************************ 00:06:11.482 START TEST accel_missing_filename 00:06:11.482 ************************************ 00:06:11.482 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress 00:06:11.482 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # local es=0 00:06:11.482 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:11.482 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:11.482 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:11.482 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:11.482 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:11.482 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress 00:06:11.482 23:32:00 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:11.482 23:32:00 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:11.482 23:32:00 
accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.482 23:32:00 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.482 23:32:00 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.482 23:32:00 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.482 23:32:00 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.482 23:32:00 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:11.482 23:32:00 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:11.482 [2024-07-15 23:32:00.337578] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:11.482 [2024-07-15 23:32:00.337632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838733 ] 00:06:11.482 [2024-07-15 23:32:00.391654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.739 [2024-07-15 23:32:00.465065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.739 [2024-07-15 23:32:00.506384] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.739 [2024-07-15 23:32:00.566102] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:11.739 A filename is required. 
00:06:11.739 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # es=234 00:06:11.739 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:11.739 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@654 -- # es=106 00:06:11.739 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@655 -- # case "$es" in 00:06:11.739 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=1 00:06:11.739 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:11.739 00:06:11.739 real 0m0.325s 00:06:11.739 user 0m0.247s 00:06:11.739 sys 0m0.114s 00:06:11.739 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:11.739 23:32:00 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:11.739 ************************************ 00:06:11.739 END TEST accel_missing_filename 00:06:11.739 ************************************ 00:06:11.739 23:32:00 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:11.739 23:32:00 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:11.739 23:32:00 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:06:11.739 23:32:00 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:11.739 23:32:00 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.739 ************************************ 00:06:11.739 START TEST accel_compress_verify 00:06:11.739 ************************************ 00:06:11.739 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:11.739 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # local es=0 00:06:11.739 23:32:00 
accel.accel_compress_verify -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:11.739 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:11.739 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:11.739 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:11.739 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:11.739 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:11.739 23:32:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:11.739 23:32:00 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:11.739 23:32:00 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.739 23:32:00 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.739 23:32:00 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.739 23:32:00 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.739 23:32:00 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.739 23:32:00 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:11.739 23:32:00 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:11.739 [2024-07-15 23:32:00.699880] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:11.739 [2024-07-15 23:32:00.699918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838897 ] 00:06:11.997 [2024-07-15 23:32:00.748210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.997 [2024-07-15 23:32:00.823915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.997 [2024-07-15 23:32:00.864974] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.997 [2024-07-15 23:32:00.924744] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:12.256 00:06:12.256 Compression does not support the verify option, aborting. 00:06:12.256 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # es=161 00:06:12.256 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:12.256 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@654 -- # es=33 00:06:12.256 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@655 -- # case "$es" in 00:06:12.256 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=1 00:06:12.256 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:12.256 00:06:12.256 real 0m0.312s 00:06:12.256 user 0m0.243s 00:06:12.256 sys 0m0.106s 00:06:12.256 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:12.256 23:32:00 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:12.256 ************************************ 00:06:12.256 END TEST accel_compress_verify 00:06:12.256 ************************************ 00:06:12.256 23:32:01 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:12.256 23:32:01 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT 
accel_perf -t 1 -w foobar 00:06:12.256 23:32:01 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:12.256 23:32:01 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:12.256 23:32:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.256 ************************************ 00:06:12.256 START TEST accel_wrong_workload 00:06:12.256 ************************************ 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w foobar 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # local es=0 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w foobar 00:06:12.256 23:32:01 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:12.256 23:32:01 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:12.256 23:32:01 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.256 23:32:01 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.256 23:32:01 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.256 23:32:01 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.256 23:32:01 accel.accel_wrong_workload -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.256 23:32:01 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:12.256 23:32:01 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:12.256 Unsupported workload type: foobar 00:06:12.256 [2024-07-15 23:32:01.062802] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:12.256 accel_perf options: 00:06:12.256 [-h help message] 00:06:12.256 [-q queue depth per core] 00:06:12.256 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:12.256 [-T number of threads per core 00:06:12.256 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:12.256 [-t time in seconds] 00:06:12.256 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:12.256 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:12.256 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:12.256 [-l for compress/decompress workloads, name of uncompressed input file 00:06:12.256 [-S for crc32c workload, use this seed value (default 0) 00:06:12.256 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:12.256 [-f for fill workload, use this BYTE value (default 255) 00:06:12.256 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:12.256 [-y verify result if this switch is on] 00:06:12.256 [-a tasks to allocate per core (default: same value as -q)] 00:06:12.256 Can be used to spread operations across a wider range of memory. 
00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # es=1 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:12.256 00:06:12.256 real 0m0.029s 00:06:12.256 user 0m0.015s 00:06:12.256 sys 0m0.014s 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:12.256 23:32:01 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:12.256 ************************************ 00:06:12.256 END TEST accel_wrong_workload 00:06:12.256 ************************************ 00:06:12.256 Error: writing output failed: Broken pipe 00:06:12.256 23:32:01 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:12.256 23:32:01 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:12.256 23:32:01 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:06:12.256 23:32:01 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:12.256 23:32:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.256 ************************************ 00:06:12.256 START TEST accel_negative_buffers 00:06:12.256 ************************************ 00:06:12.256 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:12.256 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # local es=0 00:06:12.256 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:12.256 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:12.256 23:32:01 accel.accel_negative_buffers -- 
common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:12.256 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:12.256 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:12.256 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w xor -y -x -1 00:06:12.256 23:32:01 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:12.256 23:32:01 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:12.256 23:32:01 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.256 23:32:01 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.256 23:32:01 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.256 23:32:01 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.256 23:32:01 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.256 23:32:01 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:12.256 23:32:01 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:12.256 -x option must be non-negative. 00:06:12.256 [2024-07-15 23:32:01.135672] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:12.256 accel_perf options: 00:06:12.256 [-h help message] 00:06:12.256 [-q queue depth per core] 00:06:12.256 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:12.256 [-T number of threads per core 00:06:12.256 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:12.256 [-t time in seconds] 00:06:12.256 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:12.256 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:12.256 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:12.256 [-l for compress/decompress workloads, name of uncompressed input file 00:06:12.256 [-S for crc32c workload, use this seed value (default 0) 00:06:12.256 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:12.256 [-f for fill workload, use this BYTE value (default 255) 00:06:12.256 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:12.256 [-y verify result if this switch is on] 00:06:12.256 [-a tasks to allocate per core (default: same value as -q)] 00:06:12.256 Can be used to spread operations across a wider range of memory. 
00:06:12.257 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # es=1 00:06:12.257 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:12.257 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:12.257 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:12.257 00:06:12.257 real 0m0.029s 00:06:12.257 user 0m0.019s 00:06:12.257 sys 0m0.010s 00:06:12.257 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:12.257 23:32:01 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:12.257 ************************************ 00:06:12.257 END TEST accel_negative_buffers 00:06:12.257 ************************************ 00:06:12.257 Error: writing output failed: Broken pipe 00:06:12.257 23:32:01 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:12.257 23:32:01 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:12.257 23:32:01 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:12.257 23:32:01 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:12.257 23:32:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.257 ************************************ 00:06:12.257 START TEST accel_crc32c 00:06:12.257 ************************************ 00:06:12.257 23:32:01 accel.accel_crc32c -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 
00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:12.257 23:32:01 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:12.257 [2024-07-15 23:32:01.209908] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:12.257 [2024-07-15 23:32:01.209962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839038 ] 00:06:12.514 [2024-07-15 23:32:01.266363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.514 [2024-07-15 23:32:01.338794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.514 23:32:01 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.514 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.515 23:32:01 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.515 23:32:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:13.887 23:32:02 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.887 00:06:13.887 real 0m1.330s 00:06:13.887 user 0m1.227s 00:06:13.887 sys 0m0.108s 00:06:13.887 23:32:02 accel.accel_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:13.887 23:32:02 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:13.887 ************************************ 00:06:13.887 END TEST accel_crc32c 00:06:13.887 ************************************ 00:06:13.887 23:32:02 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:13.887 23:32:02 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:13.887 23:32:02 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:13.887 23:32:02 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:13.887 23:32:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.887 ************************************ 
00:06:13.887 START TEST accel_crc32c_C2 00:06:13.887 ************************************ 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:13.887 [2024-07-15 23:32:02.599810] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:13.887 [2024-07-15 23:32:02.599878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839285 ] 00:06:13.887 [2024-07-15 23:32:02.654758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.887 [2024-07-15 23:32:02.727824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=32 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:13.887 
23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:13.887 23:32:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.262 00:06:15.262 real 0m1.332s 00:06:15.262 user 0m1.214s 00:06:15.262 sys 0m0.122s 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:15.262 23:32:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:15.262 ************************************ 00:06:15.262 END TEST accel_crc32c_C2 00:06:15.262 ************************************ 00:06:15.262 23:32:03 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:15.262 23:32:03 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:15.262 23:32:03 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:15.262 23:32:03 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:15.262 23:32:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.262 ************************************ 00:06:15.262 START TEST accel_copy 00:06:15.262 ************************************ 00:06:15.263 23:32:03 accel.accel_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy -y 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:15.263 23:32:03 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:15.263 [2024-07-15 23:32:03.985469] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:15.263 [2024-07-15 23:32:03.985517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839533 ] 00:06:15.263 [2024-07-15 23:32:04.039604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.263 [2024-07-15 23:32:04.111823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 
accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:15.263 23:32:04 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.263 23:32:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.641 23:32:05 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:16.641 23:32:05 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.641 00:06:16.641 real 0m1.328s 00:06:16.641 user 0m1.214s 00:06:16.641 sys 0m0.119s 00:06:16.641 23:32:05 accel.accel_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:16.641 23:32:05 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:16.641 ************************************ 00:06:16.641 END TEST accel_copy 00:06:16.641 ************************************ 00:06:16.641 23:32:05 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:16.641 23:32:05 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.641 23:32:05 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:06:16.641 23:32:05 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:16.641 23:32:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.641 ************************************ 00:06:16.641 START TEST accel_fill 00:06:16.641 ************************************ 00:06:16.641 23:32:05 accel.accel_fill -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.641 23:32:05 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:16.641 23:32:05 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:16.641 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.641 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.641 23:32:05 accel.accel_fill -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.641 23:32:05 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.641 23:32:05 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:16.641 23:32:05 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.641 23:32:05 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.641 23:32:05 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:16.642 [2024-07-15 23:32:05.369707] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:16.642 [2024-07-15 23:32:05.369755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839784 ] 00:06:16.642 [2024-07-15 23:32:05.423550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.642 [2024-07-15 23:32:05.496098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.642 23:32:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:18.020 23:32:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.020 00:06:18.020 real 0m1.328s 00:06:18.020 user 0m1.214s 00:06:18.020 sys 0m0.119s 00:06:18.020 23:32:06 accel.accel_fill -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:18.020 23:32:06 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:18.020 ************************************ 00:06:18.020 END TEST accel_fill 00:06:18.020 ************************************ 00:06:18.020 23:32:06 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:18.020 23:32:06 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:18.020 23:32:06 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:18.020 23:32:06 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:18.020 23:32:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.020 ************************************ 00:06:18.020 START TEST accel_copy_crc32c 00:06:18.020 ************************************ 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- 
common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:18.020 [2024-07-15 23:32:06.748072] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
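The accel_fill run above benchmarks the "fill" operation: writing a repeated byte into a destination buffer, i.e. a memset-style primitive (the log's `-w fill -f 128` selects the workload and fill pattern). As a minimal illustrative sketch — not SPDK code, with a hypothetical fill byte standing in for the benchmark's pattern — the operation amounts to:

```python
# Sketch of the semantics of the accel "fill" operation (memset-like).
# The fill byte 0x5A below is a hypothetical stand-in for the pattern
# accel_perf uses; only the buffer size (4096 bytes) comes from the log.

def fill(dst: bytearray, value: int) -> None:
    """Write `value` (0-255) into every byte of dst, like memset()."""
    for i in range(len(dst)):
        dst[i] = value

buf = bytearray(4096)      # matches the '4096 bytes' buffer size in the log
fill(buf, 0x5A)
assert all(b == 0x5A for b in buf)
```

The benchmark repeats this against 64-byte-aligned buffers (`-a 64`) at queue depth 64 (`-q 64`) for one second (`-t 1`).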
00:06:18.020 [2024-07-15 23:32:06.748120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid840031 ] 00:06:18.020 [2024-07-15 23:32:06.802052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.020 [2024-07-15 23:32:06.874850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:18.020 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val=software 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.021 23:32:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.398 23:32:08 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.398 00:06:19.398 real 0m1.323s 00:06:19.398 user 0m1.216s 00:06:19.398 sys 0m0.112s 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:19.398 23:32:08 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:19.398 ************************************ 00:06:19.398 END TEST accel_copy_crc32c 00:06:19.398 ************************************ 00:06:19.398 23:32:08 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:19.398 23:32:08 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:19.398 23:32:08 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:19.398 23:32:08 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:19.398 23:32:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.398 ************************************ 00:06:19.398 START TEST accel_copy_crc32c_C2 00:06:19.398 
************************************ 00:06:19.398 23:32:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:19.398 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.398 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:19.398 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:19.399 [2024-07-15 23:32:08.121258] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
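The copy_crc32c tests above exercise an operation that copies a source buffer and computes its CRC-32C (Castagnoli polynomial) in one pass. A minimal bitwise sketch of that semantics — not SPDK's table-driven implementation — using the reflected polynomial 0x82F63B78 and the standard check value for the ASCII string "123456789":

```python
# Bitwise CRC-32C (Castagnoli), the checksum the copy_crc32c accel
# operation computes while copying data. Reflected polynomial 0x82F63B78;
# the well-known check value for b"123456789" is 0xE3069283.

def crc32c(data: bytes, crc: int = 0) -> int:
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def copy_crc32c(src: bytes) -> tuple:
    """Copy src and return (copy, crc) -- the two results the op produces."""
    dst = bytes(src)
    return dst, crc32c(src)

dst, crc = copy_crc32c(b"123456789")
assert dst == b"123456789"
assert crc == 0xE3069283
```

The `accel_copy_crc32c_C2` variant re-runs the same workload with `-C 2`; per the log it uses a 4096-byte source and an 8192-byte destination buffer.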
00:06:19.399 [2024-07-15 23:32:08.121295] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid840278 ] 00:06:19.399 [2024-07-15 23:32:08.173865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.399 [2024-07-15 23:32:08.247436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.399 23:32:08 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.399 23:32:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.779 
23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.779 00:06:20.779 real 0m1.315s 00:06:20.779 user 0m1.212s 00:06:20.779 sys 0m0.108s 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:20.779 23:32:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:20.779 ************************************ 00:06:20.779 END TEST accel_copy_crc32c_C2 00:06:20.780 ************************************ 00:06:20.780 23:32:09 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:20.780 23:32:09 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:20.780 23:32:09 accel -- common/autotest_common.sh@1093 -- # 
'[' 7 -le 1 ']' 00:06:20.780 23:32:09 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:20.780 23:32:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:20.780 ************************************ 00:06:20.780 START TEST accel_dualcast 00:06:20.780 ************************************ 00:06:20.780 23:32:09 accel.accel_dualcast -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dualcast -y 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:20.780 [2024-07-15 23:32:09.496308] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
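The accel_dualcast test starting here covers an operation that writes one source buffer to two destinations in a single submission. A minimal sketch of that behavior — an assumption about the semantics, not SPDK's implementation:

```python
# Sketch of the "dualcast" accel operation: a single source buffer is
# copied to two independent destination buffers in one operation.

def dualcast(src: bytes) -> tuple:
    """Return two independent copies of src, as dualcast produces."""
    dst1 = bytearray(src)
    dst2 = bytearray(src)
    return dst1, dst2

src = b"\xAB" * 4096       # '4096 bytes', as in the log's test parameters
d1, d2 = dualcast(src)
assert d1 == src and d2 == src
assert d1 is not d2        # distinct destination buffers
```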
00:06:20.780 [2024-07-15 23:32:09.496363] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid840532 ] 00:06:20.780 [2024-07-15 23:32:09.552122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.780 [2024-07-15 23:32:09.624652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 
00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:20.780 23:32:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:22.159 23:32:10 accel.accel_dualcast -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.159 00:06:22.159 real 0m1.332s 00:06:22.159 user 0m1.220s 00:06:22.159 sys 0m0.115s 00:06:22.159 23:32:10 accel.accel_dualcast -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:22.159 23:32:10 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:22.159 ************************************ 00:06:22.159 END TEST accel_dualcast 00:06:22.159 ************************************ 00:06:22.159 23:32:10 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:22.159 23:32:10 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:22.159 23:32:10 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:22.159 23:32:10 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:22.159 23:32:10 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.159 ************************************ 00:06:22.159 START TEST accel_compare 00:06:22.159 ************************************ 00:06:22.159 23:32:10 accel.accel_compare -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compare -y 00:06:22.159 23:32:10 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:22.159 23:32:10 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:22.159 23:32:10 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.159 23:32:10 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:10 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:22.160 23:32:10 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:22.160 23:32:10 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:22.160 23:32:10 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.160 23:32:10 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.160 
23:32:10 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.160 23:32:10 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.160 23:32:10 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.160 23:32:10 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:22.160 23:32:10 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:22.160 [2024-07-15 23:32:10.885574] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:22.160 [2024-07-15 23:32:10.885622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid840781 ] 00:06:22.160 [2024-07-15 23:32:10.939612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.160 [2024-07-15 23:32:11.011396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 
23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.160 23:32:11 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.160 23:32:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:23.574 23:32:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.574 00:06:23.574 real 0m1.327s 00:06:23.574 user 0m1.227s 00:06:23.574 sys 0m0.104s 00:06:23.574 23:32:12 accel.accel_compare -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:23.574 23:32:12 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:23.574 ************************************ 00:06:23.574 END TEST accel_compare 00:06:23.574 ************************************ 00:06:23.574 23:32:12 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:23.574 23:32:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:23.574 23:32:12 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:23.574 23:32:12 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:23.574 23:32:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.574 ************************************ 00:06:23.574 START TEST accel_xor 00:06:23.574 ************************************ 00:06:23.574 23:32:12 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:23.574 [2024-07-15 23:32:12.269583] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:23.574 [2024-07-15 23:32:12.269649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841034 ] 00:06:23.574 [2024-07-15 23:32:12.323540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.574 [2024-07-15 23:32:12.396793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- 
accel/accel.sh@20 -- # val=0x1 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.574 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.574 23:32:12 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.575 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.575 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.575 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.575 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:23.575 23:32:12 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:23.575 23:32:12 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:23.575 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:23.575 23:32:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:24.950 23:32:13 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.950 00:06:24.950 real 0m1.329s 00:06:24.950 user 0m1.220s 00:06:24.950 sys 0m0.113s 00:06:24.950 23:32:13 accel.accel_xor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:24.950 23:32:13 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:24.950 ************************************ 00:06:24.950 END TEST accel_xor 00:06:24.950 ************************************ 00:06:24.951 23:32:13 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:24.951 23:32:13 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:24.951 23:32:13 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:24.951 23:32:13 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:24.951 23:32:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.951 ************************************ 00:06:24.951 START TEST accel_xor 00:06:24.951 ************************************ 00:06:24.951 23:32:13 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y -x 3 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:24.951 [2024-07-15 23:32:13.654780] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:24.951 [2024-07-15 23:32:13.654836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841289 ] 00:06:24.951 [2024-07-15 23:32:13.708997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.951 [2024-07-15 23:32:13.781310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var 
val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 
23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.951 23:32:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 
-- # IFS=: 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:26.330 23:32:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.330 00:06:26.330 real 0m1.330s 00:06:26.330 user 0m1.221s 00:06:26.330 sys 0m0.114s 00:06:26.330 23:32:14 accel.accel_xor -- 
common/autotest_common.sh@1118 -- # xtrace_disable 00:06:26.330 23:32:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:26.330 ************************************ 00:06:26.330 END TEST accel_xor 00:06:26.330 ************************************ 00:06:26.330 23:32:14 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:26.330 23:32:14 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:26.330 23:32:14 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:06:26.330 23:32:14 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:26.330 23:32:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.330 ************************************ 00:06:26.330 START TEST accel_dif_verify 00:06:26.330 ************************************ 00:06:26.330 23:32:15 accel.accel_dif_verify -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_verify 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:26.330 [2024-07-15 23:32:15.039774] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:26.330 [2024-07-15 23:32:15.039826] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841536 ] 00:06:26.330 [2024-07-15 23:32:15.093725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.330 [2024-07-15 23:32:15.165690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.330 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val='8 bytes' 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:26.331 23:32:15 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r 
var val 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:27.730 23:32:16 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.730 00:06:27.730 real 0m1.327s 00:06:27.730 user 0m1.217s 00:06:27.730 sys 0m0.115s 00:06:27.730 23:32:16 accel.accel_dif_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:27.730 23:32:16 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:27.730 ************************************ 00:06:27.730 END TEST accel_dif_verify 00:06:27.730 
************************************ 00:06:27.731 23:32:16 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:27.731 23:32:16 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:27.731 23:32:16 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:06:27.731 23:32:16 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:27.731 23:32:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.731 ************************************ 00:06:27.731 START TEST accel_dif_generate 00:06:27.731 ************************************ 00:06:27.731 23:32:16 accel.accel_dif_generate -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:27.731 23:32:16 
accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:27.731 [2024-07-15 23:32:16.422511] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:27.731 [2024-07-15 23:32:16.422559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid841781 ] 00:06:27.731 [2024-07-15 23:32:16.476642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.731 [2024-07-15 23:32:16.549022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:27.731 23:32:16 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 
23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:27.731 23:32:16 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var 
val 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:29.114 23:32:17 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.114 00:06:29.114 real 0m1.328s 00:06:29.114 user 0m1.225s 00:06:29.114 sys 0m0.109s 00:06:29.114 23:32:17 accel.accel_dif_generate -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:29.114 23:32:17 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:29.114 ************************************ 00:06:29.114 END TEST 
accel_dif_generate 00:06:29.114 ************************************ 00:06:29.114 23:32:17 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:29.114 23:32:17 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:29.114 23:32:17 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:06:29.114 23:32:17 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:29.114 23:32:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.114 ************************************ 00:06:29.114 START TEST accel_dif_generate_copy 00:06:29.114 ************************************ 00:06:29.114 23:32:17 accel.accel_dif_generate_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate_copy 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:29.115 [2024-07-15 23:32:17.805862] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:29.115 [2024-07-15 23:32:17.805908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842036 ] 00:06:29.115 [2024-07-15 23:32:17.859927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.115 [2024-07-15 23:32:17.932455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 
00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case 
"$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.115 23:32:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.494 23:32:19 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.494 00:06:30.494 real 0m1.328s 00:06:30.494 user 0m1.223s 00:06:30.494 sys 0m0.110s 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:30.494 23:32:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:30.494 ************************************ 00:06:30.494 END TEST 
accel_dif_generate_copy 00:06:30.494 ************************************ 00:06:30.494 23:32:19 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:30.494 23:32:19 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:30.495 23:32:19 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.495 23:32:19 accel -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:06:30.495 23:32:19 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:30.495 23:32:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.495 ************************************ 00:06:30.495 START TEST accel_comp 00:06:30.495 ************************************ 00:06:30.495 23:32:19 accel.accel_comp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.495 23:32:19 
accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:30.495 [2024-07-15 23:32:19.178918] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:30.495 [2024-07-15 23:32:19.178984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842282 ] 00:06:30.495 [2024-07-15 23:32:19.233677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.495 [2024-07-15 23:32:19.305893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 
00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:30.495 23:32:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.873 23:32:20 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:31.873 23:32:20 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.873 00:06:31.873 real 0m1.333s 00:06:31.873 user 0m1.223s 00:06:31.873 sys 0m0.115s 00:06:31.873 23:32:20 accel.accel_comp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:31.873 23:32:20 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:31.873 ************************************ 00:06:31.873 END TEST accel_comp 00:06:31.873 ************************************ 00:06:31.873 23:32:20 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:31.873 23:32:20 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.873 23:32:20 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:31.873 23:32:20 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:31.873 23:32:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.873 ************************************ 00:06:31.873 START TEST accel_decomp 00:06:31.873 ************************************ 00:06:31.873 23:32:20 accel.accel_decomp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.873 23:32:20 
accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:31.873 [2024-07-15 23:32:20.564950] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:31.873 [2024-07-15 23:32:20.564998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842535 ] 00:06:31.873 [2024-07-15 23:32:20.619140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.873 [2024-07-15 23:32:20.691144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.873 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 
23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.874 23:32:20 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:31.874 23:32:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:21 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:33.250 23:32:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.250 00:06:33.250 real 0m1.330s 00:06:33.250 user 0m1.222s 00:06:33.250 sys 0m0.114s 00:06:33.250 23:32:21 accel.accel_decomp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:33.250 23:32:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:33.250 ************************************ 00:06:33.250 END TEST accel_decomp 00:06:33.250 ************************************ 00:06:33.250 23:32:21 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:33.250 23:32:21 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:33.250 23:32:21 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:06:33.250 23:32:21 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:33.250 23:32:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.250 ************************************ 00:06:33.250 START TEST accel_decomp_full 00:06:33.250 ************************************ 00:06:33.250 23:32:21 accel.accel_decomp_full -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:33.250 
23:32:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:33.250 23:32:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:33.250 [2024-07-15 23:32:21.952037] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:33.250 [2024-07-15 23:32:21.952093] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842780 ] 00:06:33.250 [2024-07-15 23:32:22.007670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.250 [2024-07-15 23:32:22.079983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.250 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.251 23:32:22 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:33.251 23:32:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.628 23:32:23 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.628 00:06:34.628 real 0m1.337s 00:06:34.628 user 0m1.232s 00:06:34.628 sys 0m0.110s 00:06:34.628 23:32:23 accel.accel_decomp_full -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:34.628 23:32:23 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:34.628 ************************************ 00:06:34.628 END TEST accel_decomp_full 00:06:34.628 ************************************ 00:06:34.628 23:32:23 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:34.628 23:32:23 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.628 23:32:23 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:06:34.628 23:32:23 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:34.628 23:32:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.628 
************************************ 00:06:34.628 START TEST accel_decomp_mcore 00:06:34.628 ************************************ 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:34.628 [2024-07-15 23:32:23.342891] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:34.628 [2024-07-15 23:32:23.342938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843032 ] 00:06:34.628 [2024-07-15 23:32:23.396536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.628 [2024-07-15 23:32:23.470623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.628 [2024-07-15 23:32:23.470715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.628 [2024-07-15 23:32:23.470801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.628 [2024-07-15 23:32:23.470802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:34.628 23:32:23 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.005 00:06:36.005 real 0m1.342s 00:06:36.005 user 0m4.560s 00:06:36.005 sys 0m0.121s 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:36.005 23:32:24 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:36.005 ************************************ 00:06:36.006 END TEST accel_decomp_mcore 00:06:36.006 ************************************ 00:06:36.006 23:32:24 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:36.006 23:32:24 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:36.006 23:32:24 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:06:36.006 23:32:24 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:36.006 23:32:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.006 ************************************ 00:06:36.006 START TEST accel_decomp_full_mcore 00:06:36.006 ************************************ 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:36.006 23:32:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:36.006 [2024-07-15 23:32:24.751384] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:36.006 [2024-07-15 23:32:24.751440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843288 ] 00:06:36.006 [2024-07-15 23:32:24.806772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.006 [2024-07-15 23:32:24.881498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.006 [2024-07-15 23:32:24.881594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.006 [2024-07-15 23:32:24.881680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.006 [2024-07-15 23:32:24.881682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:36.006 23:32:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 23:32:24 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.386 00:06:37.386 real 0m1.355s 00:06:37.386 user 0m4.589s 00:06:37.386 sys 0m0.129s 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:37.386 23:32:26 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:37.386 ************************************ 00:06:37.386 END TEST accel_decomp_full_mcore 00:06:37.386 ************************************ 00:06:37.386 23:32:26 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:37.386 23:32:26 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.386 23:32:26 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:06:37.386 23:32:26 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:37.386 23:32:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.386 
************************************ 00:06:37.386 START TEST accel_decomp_mthread 00:06:37.386 ************************************ 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:37.386 [2024-07-15 23:32:26.174135] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:37.386 [2024-07-15 23:32:26.174205] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843537 ] 00:06:37.386 [2024-07-15 23:32:26.228763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.386 [2024-07-15 23:32:26.301880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:37.386 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # 
accel_module=software 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.646 
23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:37.646 23:32:26 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 23:32:27 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.584 00:06:38.584 real 0m1.343s 00:06:38.584 user 0m1.241s 00:06:38.584 sys 0m0.116s 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:38.584 23:32:27 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 
00:06:38.584 ************************************ 00:06:38.584 END TEST accel_decomp_mthread 00:06:38.584 ************************************ 00:06:38.584 23:32:27 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:38.584 23:32:27 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.584 23:32:27 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:06:38.584 23:32:27 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:38.584 23:32:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.584 ************************************ 00:06:38.584 START TEST accel_decomp_full_mthread 00:06:38.584 ************************************ 00:06:38.584 23:32:27 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.584 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:38.844 23:32:27 
accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:38.844 [2024-07-15 23:32:27.581446] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:38.844 [2024-07-15 23:32:27.581498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843786 ] 00:06:38.844 [2024-07-15 23:32:27.635371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.844 [2024-07-15 23:32:27.707703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.844 
23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.844 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val='111250 bytes' 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val=32 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:06:38.845 23:32:27 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.224 00:06:40.224 real 0m1.358s 00:06:40.224 user 0m1.253s 00:06:40.224 sys 0m0.118s 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:40.224 23:32:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:40.224 ************************************ 00:06:40.224 END TEST accel_decomp_full_mthread 00:06:40.224 ************************************ 00:06:40.224 23:32:28 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:40.224 23:32:28 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:40.224 23:32:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.224 23:32:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:40.224 23:32:28 accel -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:40.224 23:32:28 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:40.224 
23:32:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.224 23:32:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.224 23:32:28 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.224 23:32:28 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.224 23:32:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.224 23:32:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.224 23:32:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:40.224 23:32:28 accel -- accel/accel.sh@41 -- # jq -r . 00:06:40.224 ************************************ 00:06:40.224 START TEST accel_dif_functional_tests 00:06:40.224 ************************************ 00:06:40.224 23:32:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:40.224 [2024-07-15 23:32:29.025480] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:40.224 [2024-07-15 23:32:29.025514] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844039 ] 00:06:40.224 [2024-07-15 23:32:29.077630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.224 [2024-07-15 23:32:29.151405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.224 [2024-07-15 23:32:29.151504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.224 [2024-07-15 23:32:29.151505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.509 00:06:40.509 00:06:40.509 CUnit - A unit testing framework for C - Version 2.1-3 00:06:40.509 http://cunit.sourceforge.net/ 00:06:40.510 00:06:40.510 00:06:40.510 Suite: accel_dif 00:06:40.510 Test: verify: DIF generated, GUARD check ...passed 00:06:40.510 Test: verify: DIF generated, APPTAG 
check ...passed 00:06:40.510 Test: verify: DIF generated, REFTAG check ...passed 00:06:40.510 Test: verify: DIF not generated, GUARD check ...[2024-07-15 23:32:29.219801] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.510 passed 00:06:40.510 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 23:32:29.219845] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:40.510 passed 00:06:40.510 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 23:32:29.219864] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.510 passed 00:06:40.510 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:40.510 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 23:32:29.219906] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:40.510 passed 00:06:40.510 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:40.510 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:40.510 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:40.510 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 23:32:29.220002] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:40.510 passed 00:06:40.510 Test: verify copy: DIF generated, GUARD check ...passed 00:06:40.510 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:40.510 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:40.510 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 23:32:29.220109] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:40.510 passed 00:06:40.510 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 23:32:29.220129] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 
00:06:40.510 passed 00:06:40.510 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 23:32:29.220148] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:40.510 passed 00:06:40.510 Test: generate copy: DIF generated, GUARD check ...passed 00:06:40.510 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:40.510 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:40.510 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:40.510 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:40.510 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:40.510 Test: generate copy: iovecs-len validate ...[2024-07-15 23:32:29.220315] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:40.510 passed 00:06:40.510 Test: generate copy: buffer alignment validate ...passed 00:06:40.510 00:06:40.510 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.510 suites 1 1 n/a 0 0 00:06:40.510 tests 26 26 26 0 0 00:06:40.510 asserts 115 115 115 0 n/a 00:06:40.510 00:06:40.510 Elapsed time = 0.002 seconds 00:06:40.510 00:06:40.510 real 0m0.407s 00:06:40.510 user 0m0.616s 00:06:40.510 sys 0m0.148s 00:06:40.510 23:32:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:40.510 23:32:29 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:40.510 ************************************ 00:06:40.510 END TEST accel_dif_functional_tests 00:06:40.510 ************************************ 00:06:40.510 23:32:29 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:40.510 00:06:40.510 real 0m30.574s 00:06:40.510 user 0m34.394s 00:06:40.510 sys 0m4.056s 00:06:40.510 23:32:29 accel -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:40.510 23:32:29 accel -- common/autotest_common.sh@10 -- # 
set +x 00:06:40.510 ************************************ 00:06:40.510 END TEST accel 00:06:40.510 ************************************ 00:06:40.510 23:32:29 -- common/autotest_common.sh@1136 -- # return 0 00:06:40.510 23:32:29 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.510 23:32:29 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:40.510 23:32:29 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:40.510 23:32:29 -- common/autotest_common.sh@10 -- # set +x 00:06:40.769 ************************************ 00:06:40.769 START TEST accel_rpc 00:06:40.769 ************************************ 00:06:40.769 23:32:29 accel_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:40.769 * Looking for test storage... 00:06:40.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:40.769 23:32:29 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:40.769 23:32:29 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=844253 00:06:40.769 23:32:29 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 844253 00:06:40.769 23:32:29 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:40.769 23:32:29 accel_rpc -- common/autotest_common.sh@823 -- # '[' -z 844253 ']' 00:06:40.769 23:32:29 accel_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.769 23:32:29 accel_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:40.769 23:32:29 accel_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:40.769 23:32:29 accel_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:40.769 23:32:29 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.770 [2024-07-15 23:32:29.601654] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:40.770 [2024-07-15 23:32:29.601701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844253 ] 00:06:40.770 [2024-07-15 23:32:29.656173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.770 [2024-07-15 23:32:29.737342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.709 23:32:30 accel_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:41.709 23:32:30 accel_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:41.709 23:32:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:41.709 23:32:30 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:41.709 23:32:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:41.709 23:32:30 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:41.709 23:32:30 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:41.709 23:32:30 accel_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:41.709 23:32:30 accel_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:41.709 23:32:30 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.709 ************************************ 00:06:41.709 START TEST accel_assign_opcode 00:06:41.709 ************************************ 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1117 -- # accel_assign_opcode_test_suite 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o 
copy -m incorrect 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.709 [2024-07-15 23:32:30.427369] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.709 [2024-07-15 23:32:30.435380] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:41.709 23:32:30 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:41.709 software 00:06:41.709 00:06:41.709 real 0m0.235s 00:06:41.709 user 0m0.040s 00:06:41.709 sys 0m0.009s 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:41.709 23:32:30 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:41.709 ************************************ 00:06:41.709 END TEST accel_assign_opcode 00:06:41.709 ************************************ 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@1136 -- # return 0 00:06:41.969 23:32:30 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 844253 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@942 -- # '[' -z 844253 ']' 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@946 -- # kill -0 844253 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@947 -- # uname 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 844253 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 844253' 00:06:41.969 killing process with pid 844253 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@961 -- # kill 844253 00:06:41.969 23:32:30 accel_rpc -- common/autotest_common.sh@966 -- # wait 844253 00:06:42.227 00:06:42.227 real 0m1.547s 00:06:42.227 user 0m1.614s 00:06:42.227 sys 0m0.400s 00:06:42.227 23:32:31 accel_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:42.227 23:32:31 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.227 ************************************ 00:06:42.227 END TEST accel_rpc 00:06:42.227 
************************************ 00:06:42.227 23:32:31 -- common/autotest_common.sh@1136 -- # return 0 00:06:42.227 23:32:31 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.227 23:32:31 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:42.227 23:32:31 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:42.227 23:32:31 -- common/autotest_common.sh@10 -- # set +x 00:06:42.227 ************************************ 00:06:42.227 START TEST app_cmdline 00:06:42.227 ************************************ 00:06:42.227 23:32:31 app_cmdline -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:42.227 * Looking for test storage... 00:06:42.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:42.227 23:32:31 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:42.227 23:32:31 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=844630 00:06:42.227 23:32:31 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:42.227 23:32:31 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 844630 00:06:42.227 23:32:31 app_cmdline -- common/autotest_common.sh@823 -- # '[' -z 844630 ']' 00:06:42.227 23:32:31 app_cmdline -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.227 23:32:31 app_cmdline -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:42.227 23:32:31 app_cmdline -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.227 23:32:31 app_cmdline -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:42.227 23:32:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:42.486 [2024-07-15 23:32:31.232017] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:42.486 [2024-07-15 23:32:31.232061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844630 ] 00:06:42.486 [2024-07-15 23:32:31.285795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.486 [2024-07-15 23:32:31.359759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@856 -- # return 0 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:43.423 { 00:06:43.423 "version": "SPDK v24.09-pre git sha1 00bf4c571", 00:06:43.423 "fields": { 00:06:43.423 "major": 24, 00:06:43.423 "minor": 9, 00:06:43.423 "patch": 0, 00:06:43.423 "suffix": "-pre", 00:06:43.423 "commit": "00bf4c571" 00:06:43.423 } 00:06:43.423 } 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:43.423 23:32:32 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@642 -- # local es=0 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:43.423 23:32:32 app_cmdline -- common/autotest_common.sh@645 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:43.682 request: 00:06:43.682 { 00:06:43.682 "method": "env_dpdk_get_mem_stats", 00:06:43.682 "req_id": 1 00:06:43.682 } 00:06:43.682 Got JSON-RPC error response 00:06:43.682 response: 00:06:43.682 { 00:06:43.682 "code": -32601, 00:06:43.682 "message": "Method not found" 00:06:43.682 } 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@645 -- # es=1 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:43.682 23:32:32 app_cmdline -- app/cmdline.sh@1 -- # killprocess 844630 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@942 -- # '[' -z 844630 ']' 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@946 -- # kill -0 844630 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@947 -- # uname 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 844630 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@960 -- # echo 'killing process with pid 844630' 00:06:43.682 killing process with pid 844630 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@961 -- # kill 844630 00:06:43.682 23:32:32 app_cmdline -- common/autotest_common.sh@966 -- # wait 844630 00:06:43.941 00:06:43.941 real 0m1.684s 00:06:43.941 user 0m2.022s 00:06:43.941 sys 0m0.429s 00:06:43.941 23:32:32 app_cmdline -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:43.941 23:32:32 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:06:43.941 ************************************ 00:06:43.941 END TEST app_cmdline 00:06:43.941 ************************************ 00:06:43.941 23:32:32 -- common/autotest_common.sh@1136 -- # return 0 00:06:43.941 23:32:32 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:43.941 23:32:32 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:43.941 23:32:32 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:43.941 23:32:32 -- common/autotest_common.sh@10 -- # set +x 00:06:43.941 ************************************ 00:06:43.941 START TEST version 00:06:43.941 ************************************ 00:06:43.941 23:32:32 version -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:44.201 * Looking for test storage... 00:06:44.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:44.201 23:32:32 version -- app/version.sh@17 -- # get_header_version major 00:06:44.201 23:32:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.201 23:32:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.201 23:32:32 version -- app/version.sh@14 -- # cut -f2 00:06:44.201 23:32:32 version -- app/version.sh@17 -- # major=24 00:06:44.201 23:32:32 version -- app/version.sh@18 -- # get_header_version minor 00:06:44.201 23:32:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.201 23:32:32 version -- app/version.sh@14 -- # cut -f2 00:06:44.201 23:32:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.201 23:32:32 version -- app/version.sh@18 -- # minor=9 00:06:44.201 23:32:32 version -- app/version.sh@19 -- # get_header_version 
patch 00:06:44.201 23:32:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.201 23:32:32 version -- app/version.sh@14 -- # cut -f2 00:06:44.201 23:32:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.201 23:32:32 version -- app/version.sh@19 -- # patch=0 00:06:44.201 23:32:32 version -- app/version.sh@20 -- # get_header_version suffix 00:06:44.201 23:32:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:44.201 23:32:32 version -- app/version.sh@14 -- # cut -f2 00:06:44.201 23:32:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:44.201 23:32:32 version -- app/version.sh@20 -- # suffix=-pre 00:06:44.201 23:32:32 version -- app/version.sh@22 -- # version=24.9 00:06:44.201 23:32:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:44.201 23:32:32 version -- app/version.sh@28 -- # version=24.9rc0 00:06:44.201 23:32:32 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:44.201 23:32:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:44.201 23:32:33 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:44.201 23:32:33 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:44.201 00:06:44.201 real 0m0.156s 00:06:44.201 user 0m0.083s 00:06:44.201 sys 0m0.108s 00:06:44.201 23:32:33 version -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:44.201 23:32:33 version -- common/autotest_common.sh@10 -- # set +x 00:06:44.201 ************************************ 
00:06:44.201 END TEST version 00:06:44.201 ************************************ 00:06:44.201 23:32:33 -- common/autotest_common.sh@1136 -- # return 0 00:06:44.201 23:32:33 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:44.201 23:32:33 -- spdk/autotest.sh@198 -- # uname -s 00:06:44.201 23:32:33 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:44.201 23:32:33 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:44.201 23:32:33 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:44.201 23:32:33 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:44.201 23:32:33 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:44.201 23:32:33 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:44.201 23:32:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:44.201 23:32:33 -- common/autotest_common.sh@10 -- # set +x 00:06:44.201 23:32:33 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:44.201 23:32:33 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:44.201 23:32:33 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:44.201 23:32:33 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:44.201 23:32:33 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:44.201 23:32:33 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:44.201 23:32:33 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.201 23:32:33 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:44.201 23:32:33 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:44.201 23:32:33 -- common/autotest_common.sh@10 -- # set +x 00:06:44.201 ************************************ 00:06:44.201 START TEST nvmf_tcp 00:06:44.201 ************************************ 00:06:44.201 23:32:33 nvmf_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:44.460 * Looking for test storage... 
00:06:44.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.460 23:32:33 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.461 
23:32:33 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.461 23:32:33 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.461 23:32:33 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.461 23:32:33 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.461 23:32:33 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.461 23:32:33 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.461 23:32:33 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:44.461 23:32:33 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:44.461 23:32:33 nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:44.461 23:32:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:44.461 23:32:33 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.461 23:32:33 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:44.461 23:32:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:44.461 23:32:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:44.461 
************************************ 00:06:44.461 START TEST nvmf_example 00:06:44.461 ************************************ 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:44.461 * Looking for test storage... 00:06:44.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.461 23:32:33 
nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:44.461 23:32:33 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:49.727 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example 
-- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:49.727 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:49.727 Found net devices under 0000:86:00.0: cvl_0_0 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:49.727 Found net devices under 0000:86:00.1: cvl_0_1 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.727 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.728 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.728 23:32:37 
nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:49.728 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.728 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.728 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:49.728 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.728 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.728 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:49.728 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:49.728 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.728 23:32:37 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:49.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:49.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:06:49.728 00:06:49.728 --- 10.0.0.2 ping statistics --- 00:06:49.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.728 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:49.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:49.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:06:49.728 00:06:49.728 --- 10.0.0.1 ping statistics --- 00:06:49.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:49.728 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:49.728 23:32:38 
nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=848017 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 848017 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@823 -- # '[' -z 848017 ']' 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:49.728 23:32:38 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # return 0 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 
0 ]] 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:50.294 23:32:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:02.512 Initializing NVMe Controllers 00:07:02.512 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.512 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:02.512 Initialization complete. Launching workers. 
00:07:02.512 ======================================================== 00:07:02.512 Latency(us) 00:07:02.512 Device Information : IOPS MiB/s Average min max 00:07:02.512 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18047.25 70.50 3546.65 706.91 16256.31 00:07:02.512 ======================================================== 00:07:02.512 Total : 18047.25 70.50 3546.65 706.91 16256.31 00:07:02.512 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.512 rmmod nvme_tcp 00:07:02.512 rmmod nvme_fabrics 00:07:02.512 rmmod nvme_keyring 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 848017 ']' 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 848017 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@942 -- # '[' -z 848017 ']' 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # kill -0 848017 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # uname 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 848017 00:07:02.512 23:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # process_name=nvmf 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' nvmf = sudo ']' 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@960 -- # echo 'killing process with pid 848017' 00:07:02.513 killing process with pid 848017 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@961 -- # kill 848017 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # wait 848017 00:07:02.513 nvmf threads initialize successfully 00:07:02.513 bdev subsystem init successfully 00:07:02.513 created a nvmf target service 00:07:02.513 create targets's poll groups done 00:07:02.513 all subsystems of target started 00:07:02.513 nvmf target is running 00:07:02.513 all subsystems of target stopped 00:07:02.513 destroy targets's poll groups done 00:07:02.513 destroyed the nvmf target service 00:07:02.513 bdev subsystem finish successfully 00:07:02.513 nvmf threads destroy successfully 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.513 23:32:49 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.082 23:32:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:03.082 23:32:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:03.082 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:03.082 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:03.082 00:07:03.082 real 0m18.543s 00:07:03.082 user 0m45.632s 00:07:03.082 sys 0m4.987s 00:07:03.082 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:03.082 23:32:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:03.082 ************************************ 00:07:03.082 END TEST nvmf_example 00:07:03.082 ************************************ 00:07:03.082 23:32:51 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:03.082 23:32:51 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:03.082 23:32:51 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:03.082 23:32:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:03.082 23:32:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:03.082 ************************************ 00:07:03.082 START TEST nvmf_filesystem 00:07:03.082 ************************************ 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:03.082 * Looking for test storage... 
00:07:03.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:03.082 23:32:51 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 
00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 
00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:03.082 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:03.083 #define SPDK_CONFIG_H 00:07:03.083 
#define SPDK_CONFIG_APPS 1 00:07:03.083 #define SPDK_CONFIG_ARCH native 00:07:03.083 #undef SPDK_CONFIG_ASAN 00:07:03.083 #undef SPDK_CONFIG_AVAHI 00:07:03.083 #undef SPDK_CONFIG_CET 00:07:03.083 #define SPDK_CONFIG_COVERAGE 1 00:07:03.083 #define SPDK_CONFIG_CROSS_PREFIX 00:07:03.083 #undef SPDK_CONFIG_CRYPTO 00:07:03.083 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:03.083 #undef SPDK_CONFIG_CUSTOMOCF 00:07:03.083 #undef SPDK_CONFIG_DAOS 00:07:03.083 #define SPDK_CONFIG_DAOS_DIR 00:07:03.083 #define SPDK_CONFIG_DEBUG 1 00:07:03.083 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:03.083 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:03.083 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:03.083 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:03.083 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:03.083 #undef SPDK_CONFIG_DPDK_UADK 00:07:03.083 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:03.083 #define SPDK_CONFIG_EXAMPLES 1 00:07:03.083 #undef SPDK_CONFIG_FC 00:07:03.083 #define SPDK_CONFIG_FC_PATH 00:07:03.083 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:03.083 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:03.083 #undef SPDK_CONFIG_FUSE 00:07:03.083 #undef SPDK_CONFIG_FUZZER 00:07:03.083 #define SPDK_CONFIG_FUZZER_LIB 00:07:03.083 #undef SPDK_CONFIG_GOLANG 00:07:03.083 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:03.083 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:03.083 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:03.083 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:03.083 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:03.083 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:03.083 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:03.083 #define SPDK_CONFIG_IDXD 1 00:07:03.083 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:03.083 #undef SPDK_CONFIG_IPSEC_MB 00:07:03.083 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:03.083 #define SPDK_CONFIG_ISAL 1 00:07:03.083 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:03.083 #define SPDK_CONFIG_ISCSI_INITIATOR 1 
00:07:03.083 #define SPDK_CONFIG_LIBDIR 00:07:03.083 #undef SPDK_CONFIG_LTO 00:07:03.083 #define SPDK_CONFIG_MAX_LCORES 128 00:07:03.083 #define SPDK_CONFIG_NVME_CUSE 1 00:07:03.083 #undef SPDK_CONFIG_OCF 00:07:03.083 #define SPDK_CONFIG_OCF_PATH 00:07:03.083 #define SPDK_CONFIG_OPENSSL_PATH 00:07:03.083 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:03.083 #define SPDK_CONFIG_PGO_DIR 00:07:03.083 #undef SPDK_CONFIG_PGO_USE 00:07:03.083 #define SPDK_CONFIG_PREFIX /usr/local 00:07:03.083 #undef SPDK_CONFIG_RAID5F 00:07:03.083 #undef SPDK_CONFIG_RBD 00:07:03.083 #define SPDK_CONFIG_RDMA 1 00:07:03.083 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:03.083 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:03.083 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:03.083 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:03.083 #define SPDK_CONFIG_SHARED 1 00:07:03.083 #undef SPDK_CONFIG_SMA 00:07:03.083 #define SPDK_CONFIG_TESTS 1 00:07:03.083 #undef SPDK_CONFIG_TSAN 00:07:03.083 #define SPDK_CONFIG_UBLK 1 00:07:03.083 #define SPDK_CONFIG_UBSAN 1 00:07:03.083 #undef SPDK_CONFIG_UNIT_TESTS 00:07:03.083 #undef SPDK_CONFIG_URING 00:07:03.083 #define SPDK_CONFIG_URING_PATH 00:07:03.083 #undef SPDK_CONFIG_URING_ZNS 00:07:03.083 #undef SPDK_CONFIG_USDT 00:07:03.083 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:03.083 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:03.083 #define SPDK_CONFIG_VFIO_USER 1 00:07:03.083 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:03.083 #define SPDK_CONFIG_VHOST 1 00:07:03.083 #define SPDK_CONFIG_VIRTIO 1 00:07:03.083 #undef SPDK_CONFIG_VTUNE 00:07:03.083 #define SPDK_CONFIG_VTUNE_DIR 00:07:03.083 #define SPDK_CONFIG_WERROR 1 00:07:03.083 #define SPDK_CONFIG_WPDK_DIR 00:07:03.083 #undef SPDK_CONFIG_XNVME 00:07:03.083 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:03.083 23:32:51 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 
00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:03.083 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:03.084 23:32:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:03.084 23:32:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:03.084 23:32:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:03.084 23:32:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:03.084 23:32:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export 
DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.084 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:03.085 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:03.345 23:32:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@273 -- # MAKE=make 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@274 -- # MAKEFLAGS=-j96 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@290 -- # export HUGEMEM=4096 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@290 -- # HUGEMEM=4096 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@292 -- # NO_HUGE=() 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@293 -- # TEST_MODE= 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@294 -- # for i in "$@" 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # case "$i" in 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # TEST_TRANSPORT=tcp 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@312 -- # [[ -z 850429 ]] 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@312 -- # kill -0 850429 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1674 -- # set_test_storage 2147483648 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@322 -- # [[ -v testdir ]] 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@324 -- # local requested_size=2147483648 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@325 -- # local mount target_dir 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # local -A mounts fss sizes avails uses 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # local source fs size avail mount use 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local storage_fallback storage_candidates 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # mktemp -udt spdk.XXXXXX 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # storage_fallback=/tmp/spdk.HjSKIY 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HjSKIY/tests/target /tmp/spdk.HjSKIY 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@352 -- # requested_size=2214592512 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@321 -- # df -T 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@321 -- # grep -v Filesystem 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=spdk_devtmpfs 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=devtmpfs 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=67108864 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=67108864 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=0 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=/dev/pmem0 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=ext2 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=950202368 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=5284429824 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4334227456 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # 
mounts["$mount"]=spdk_root 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=overlay 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=189577441280 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=195974299648 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=6396858368 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:03.345 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=97983774720 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=97987149824 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=3375104 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=39185485824 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=39194861568 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=9375744 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=97986236416 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=97987149824 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=913408 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=19597422592 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=19597426688 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4096 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # printf '* Looking for test storage...\n' 00:07:03.346 * Looking for test storage... 
00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # local target_space new_size 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # for target_dir in "${storage_candidates[@]}" 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # mount=/ 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # target_space=189577441280 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # (( target_space == 0 || target_space < requested_size )) 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # (( target_space >= requested_size )) 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ overlay == tmpfs ]] 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ overlay == ramfs ]] 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ / == / ]] 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # new_size=8611450880 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@376 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@383 -- # return 0 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set -o errtrace 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # shopt -s extdebug 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # true 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # xtrace_fd 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 
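The `set_test_storage` trace above walks a list of candidate directories (`$testdir`, a `mktemp` fallback) and keeps the first one whose backing filesystem has enough free space, using `df` output. A minimal standalone sketch of that selection logic follows; the function name, candidate paths, and the 2 GiB threshold in the usage line are illustrative, not the real SPDK defaults:

```shell
# Sketch of the storage-selection idea from set_test_storage: return the
# first candidate directory whose filesystem has at least the requested
# number of free bytes. Illustrative only; not the actual SPDK helper.
pick_test_storage() {
    requested_size=$1; shift
    for dir in "$@"; do
        [ -d "$dir" ] || continue
        # GNU df: free bytes of the filesystem backing $dir (skip header)
        avail=$(df -B1 --output=avail "$dir" | tail -n1)
        if [ "$avail" -ge "$requested_size" ]; then
            echo "$dir"
            return 0
        fi
    done
    return 1
}

# Usage: prefer /tmp, fall back to the current directory.
pick_test_storage $((2 * 1024 * 1024 * 1024)) /tmp "$PWD" || echo "no suitable storage"
```

The real helper additionally special-cases overlay/tmpfs mounts and grows the requested size when the chosen filesystem is nearly full, as the `new_size * 100 / sizes[/] > 95` check in the trace shows.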
00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:03.346 23:32:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 
00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:08.619 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:08.619 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:08.619 Found net devices under 0000:86:00.0: cvl_0_0 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:08.619 Found net devices under 0000:86:00.1: cvl_0_1 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.619 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
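The discovery loop traced above buckets NICs by PCI vendor:device pair (`0x8086:0x159b` is an Intel E810 "ice" part here) and then resolves each function to its net device name through sysfs (`/sys/bus/pci/devices/$pci/net/`). A condensed, standalone version of that classification, handling only the IDs that appear in this log:

```shell
# Classify a "vendor device" pair the way gather_supported_nvmf_pci_devs
# groups them into the e810/x722/mlx arrays. Only IDs seen in this log
# are covered; everything else falls through to "unknown".
classify_nic() {
    case "$1 $2" in
        "0x8086 0x1592"|"0x8086 0x159b") echo e810 ;;
        "0x8086 0x37d2")                 echo x722 ;;
        "0x15b3 "*)                      echo mlx ;;
        *)                               echo unknown ;;
    esac
}

# The net device behind a PCI function lives under sysfs, e.g.:
#   ls /sys/bus/pci/devices/0000:86:00.0/net/   ->  cvl_0_0
classify_nic 0x8086 0x159b   # -> e810
```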
00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:08.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:08.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:07:08.620 00:07:08.620 --- 10.0.0.2 ping statistics --- 00:07:08.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.620 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:08.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:07:08.620 00:07:08.620 --- 10.0.0.1 ping statistics --- 00:07:08.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.620 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.620 ************************************ 00:07:08.620 START TEST nvmf_filesystem_no_in_capsule 00:07:08.620 ************************************ 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1117 -- # nvmf_filesystem_part 0 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # 
in_capsule=0 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=853454 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 853454 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@823 -- # '[' -z 853454 ']' 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:08.620 23:32:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:08.620 [2024-07-15 23:32:57.544177] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:08.620 [2024-07-15 23:32:57.544223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.879 [2024-07-15 23:32:57.604440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:08.879 [2024-07-15 23:32:57.688764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.879 [2024-07-15 23:32:57.688808] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.879 [2024-07-15 23:32:57.688815] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.879 [2024-07-15 23:32:57.688821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.879 [2024-07-15 23:32:57.688827] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:08.879 [2024-07-15 23:32:57.688867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.879 [2024-07-15 23:32:57.688962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.879 [2024-07-15 23:32:57.689049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.879 [2024-07-15 23:32:57.689050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # return 0 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.520 [2024-07-15 23:32:58.402274] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:09.520 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.780 Malloc1 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:09.781 23:32:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.781 [2024-07-15 23:32:58.551193] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1372 -- # local bdev_name=Malloc1 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1373 -- # local bdev_info 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bs 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local nb 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # bdev_info='[ 00:07:09.781 { 00:07:09.781 "name": "Malloc1", 00:07:09.781 "aliases": [ 00:07:09.781 "7af88393-f2ed-496e-9f18-ba98d9a0da21" 00:07:09.781 ], 00:07:09.781 "product_name": "Malloc disk", 
00:07:09.781 "block_size": 512, 00:07:09.781 "num_blocks": 1048576, 00:07:09.781 "uuid": "7af88393-f2ed-496e-9f18-ba98d9a0da21", 00:07:09.781 "assigned_rate_limits": { 00:07:09.781 "rw_ios_per_sec": 0, 00:07:09.781 "rw_mbytes_per_sec": 0, 00:07:09.781 "r_mbytes_per_sec": 0, 00:07:09.781 "w_mbytes_per_sec": 0 00:07:09.781 }, 00:07:09.781 "claimed": true, 00:07:09.781 "claim_type": "exclusive_write", 00:07:09.781 "zoned": false, 00:07:09.781 "supported_io_types": { 00:07:09.781 "read": true, 00:07:09.781 "write": true, 00:07:09.781 "unmap": true, 00:07:09.781 "flush": true, 00:07:09.781 "reset": true, 00:07:09.781 "nvme_admin": false, 00:07:09.781 "nvme_io": false, 00:07:09.781 "nvme_io_md": false, 00:07:09.781 "write_zeroes": true, 00:07:09.781 "zcopy": true, 00:07:09.781 "get_zone_info": false, 00:07:09.781 "zone_management": false, 00:07:09.781 "zone_append": false, 00:07:09.781 "compare": false, 00:07:09.781 "compare_and_write": false, 00:07:09.781 "abort": true, 00:07:09.781 "seek_hole": false, 00:07:09.781 "seek_data": false, 00:07:09.781 "copy": true, 00:07:09.781 "nvme_iov_md": false 00:07:09.781 }, 00:07:09.781 "memory_domains": [ 00:07:09.781 { 00:07:09.781 "dma_device_id": "system", 00:07:09.781 "dma_device_type": 1 00:07:09.781 }, 00:07:09.781 { 00:07:09.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.781 "dma_device_type": 2 00:07:09.781 } 00:07:09.781 ], 00:07:09.781 "driver_specific": {} 00:07:09.781 } 00:07:09.781 ]' 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # jq '.[] .block_size' 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # bs=512 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # jq '.[] .num_blocks' 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # nb=1048576 00:07:09.781 
23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_size=512 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # echo 512 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:09.781 23:32:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:11.173 23:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:11.173 23:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1192 -- # local i=0 00:07:11.173 23:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:11.173 23:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:11.173 23:32:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # sleep 2 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # return 0 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:13.071 23:33:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:13.329 23:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:13.588 23:33:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:14.964 23:33:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:14.964 ************************************ 00:07:14.964 START TEST filesystem_ext4 00:07:14.964 ************************************ 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@918 -- # local fstype=ext4 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@920 -- # local i=0 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@921 -- # local force 
00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # '[' ext4 = ext4 ']' 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # force=-F 00:07:14.964 23:33:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:14.964 mke2fs 1.46.5 (30-Dec-2021) 00:07:14.964 Discarding device blocks: 0/522240 done 00:07:14.964 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:14.964 Filesystem UUID: 756bc62b-1c29-4226-9f93-31e78a4665ca 00:07:14.964 Superblock backups stored on blocks: 00:07:14.964 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:14.964 00:07:14.964 Allocating group tables: 0/64 done 00:07:14.964 Writing inode tables: 0/64 done 00:07:15.532 Creating journal (8192 blocks): done 00:07:15.532 Writing superblocks and filesystem accounting information: 0/64 done 00:07:15.532 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # return 0 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- 
target/filesystem.sh@29 -- # i=0 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 853454 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:15.532 00:07:15.532 real 0m0.900s 00:07:15.532 user 0m0.023s 00:07:15.532 sys 0m0.066s 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:15.532 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:15.532 ************************************ 00:07:15.532 END TEST filesystem_ext4 00:07:15.532 ************************************ 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:15.791 23:33:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.791 ************************************ 00:07:15.791 START TEST filesystem_btrfs 00:07:15.791 ************************************ 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@918 -- # local fstype=btrfs 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@920 -- # local i=0 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@921 -- # local force 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # '[' btrfs = ext4 ']' 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # force=-f 00:07:15.791 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:16.050 btrfs-progs v6.6.2 00:07:16.050 See https://btrfs.readthedocs.io for more 
information. 00:07:16.050 00:07:16.050 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:16.050 NOTE: several default settings have changed in version 5.15, please make sure 00:07:16.050 this does not affect your deployments: 00:07:16.050 - DUP for metadata (-m dup) 00:07:16.050 - enabled no-holes (-O no-holes) 00:07:16.050 - enabled free-space-tree (-R free-space-tree) 00:07:16.050 00:07:16.050 Label: (null) 00:07:16.050 UUID: 2ebc9ba5-f831-40bb-a12f-1066b0159616 00:07:16.050 Node size: 16384 00:07:16.050 Sector size: 4096 00:07:16.050 Filesystem size: 510.00MiB 00:07:16.050 Block group profiles: 00:07:16.050 Data: single 8.00MiB 00:07:16.050 Metadata: DUP 32.00MiB 00:07:16.050 System: DUP 8.00MiB 00:07:16.050 SSD detected: yes 00:07:16.050 Zoned device: no 00:07:16.050 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:16.050 Runtime features: free-space-tree 00:07:16.050 Checksum: crc32c 00:07:16.050 Number of devices: 1 00:07:16.050 Devices: 00:07:16.050 ID SIZE PATH 00:07:16.050 1 510.00MiB /dev/nvme0n1p1 00:07:16.050 00:07:16.050 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # return 0 00:07:16.050 23:33:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:16.987 23:33:05 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 853454 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.987 00:07:16.987 real 0m1.147s 00:07:16.987 user 0m0.027s 00:07:16.987 sys 0m0.126s 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:16.987 ************************************ 00:07:16.987 END TEST filesystem_btrfs 00:07:16.987 ************************************ 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1099 -- # xtrace_disable 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:16.987 ************************************ 00:07:16.987 START TEST filesystem_xfs 00:07:16.987 ************************************ 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create xfs nvme0n1 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:16.987 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@918 -- # local fstype=xfs 00:07:16.988 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:16.988 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@920 -- # local i=0 00:07:16.988 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@921 -- # local force 00:07:16.988 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # '[' xfs = ext4 ']' 00:07:16.988 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # force=-f 00:07:16.988 23:33:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:16.988 meta-data=/dev/nvme0n1p1 isize=512 
agcount=4, agsize=32640 blks 00:07:16.988 = sectsz=512 attr=2, projid32bit=1 00:07:16.988 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:16.988 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:16.988 data = bsize=4096 blocks=130560, imaxpct=25 00:07:16.988 = sunit=0 swidth=0 blks 00:07:16.988 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:16.988 log =internal log bsize=4096 blocks=16384, version=2 00:07:16.988 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:16.988 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:17.935 Discarding blocks...Done. 00:07:17.935 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # return 0 00:07:17.935 23:33:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 853454 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:20.470 23:33:08 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:20.470 00:07:20.470 real 0m3.196s 00:07:20.470 user 0m0.018s 00:07:20.470 sys 0m0.079s 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:20.470 23:33:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:20.470 ************************************ 00:07:20.470 END TEST filesystem_xfs 00:07:20.470 ************************************ 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:20.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1213 -- # local i=0 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:20.470 23:33:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # return 0 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 853454 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@942 -- # '[' -z 853454 ']' 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # kill -0 853454 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # uname 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # ps 
--no-headers -o comm= 853454 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # echo 'killing process with pid 853454' 00:07:20.470 killing process with pid 853454 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@961 -- # kill 853454 00:07:20.470 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # wait 853454 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:20.730 00:07:20.730 real 0m12.125s 00:07:20.730 user 0m47.570s 00:07:20.730 sys 0m1.236s 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.730 ************************************ 00:07:20.730 END TEST nvmf_filesystem_no_in_capsule 00:07:20.730 ************************************ 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1136 -- # return 0 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.730 ************************************ 00:07:20.730 START TEST 
nvmf_filesystem_in_capsule 00:07:20.730 ************************************ 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1117 -- # nvmf_filesystem_part 4096 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=856075 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 856075 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@823 -- # '[' -z 856075 ']' 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:20.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:20.730 23:33:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.990 [2024-07-15 23:33:09.734409] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:20.990 [2024-07-15 23:33:09.734447] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.990 [2024-07-15 23:33:09.794009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.990 [2024-07-15 23:33:09.874846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.990 [2024-07-15 23:33:09.874884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.990 [2024-07-15 23:33:09.874891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.990 [2024-07-15 23:33:09.874897] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.990 [2024-07-15 23:33:09.874902] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:20.990 [2024-07-15 23:33:09.874945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.990 [2024-07-15 23:33:09.875029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.990 [2024-07-15 23:33:09.875104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.990 [2024-07-15 23:33:09.875105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # return 0 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 [2024-07-15 23:33:10.596199] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 Malloc1 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:21.928 23:33:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 [2024-07-15 23:33:10.749882] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1372 -- # local bdev_name=Malloc1 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1373 -- # local bdev_info 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bs 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local nb 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # bdev_info='[ 00:07:21.928 { 00:07:21.928 "name": "Malloc1", 00:07:21.928 "aliases": [ 00:07:21.928 "33fbfd6b-9a94-470c-b7c0-f56e021dc17b" 00:07:21.928 ], 00:07:21.928 "product_name": "Malloc disk", 00:07:21.928 "block_size": 512, 00:07:21.928 "num_blocks": 1048576, 00:07:21.928 "uuid": "33fbfd6b-9a94-470c-b7c0-f56e021dc17b", 00:07:21.928 "assigned_rate_limits": { 
00:07:21.928 "rw_ios_per_sec": 0, 00:07:21.928 "rw_mbytes_per_sec": 0, 00:07:21.928 "r_mbytes_per_sec": 0, 00:07:21.928 "w_mbytes_per_sec": 0 00:07:21.928 }, 00:07:21.928 "claimed": true, 00:07:21.928 "claim_type": "exclusive_write", 00:07:21.928 "zoned": false, 00:07:21.928 "supported_io_types": { 00:07:21.928 "read": true, 00:07:21.928 "write": true, 00:07:21.928 "unmap": true, 00:07:21.928 "flush": true, 00:07:21.928 "reset": true, 00:07:21.928 "nvme_admin": false, 00:07:21.928 "nvme_io": false, 00:07:21.928 "nvme_io_md": false, 00:07:21.928 "write_zeroes": true, 00:07:21.928 "zcopy": true, 00:07:21.928 "get_zone_info": false, 00:07:21.928 "zone_management": false, 00:07:21.928 "zone_append": false, 00:07:21.928 "compare": false, 00:07:21.928 "compare_and_write": false, 00:07:21.928 "abort": true, 00:07:21.928 "seek_hole": false, 00:07:21.928 "seek_data": false, 00:07:21.928 "copy": true, 00:07:21.928 "nvme_iov_md": false 00:07:21.928 }, 00:07:21.928 "memory_domains": [ 00:07:21.928 { 00:07:21.928 "dma_device_id": "system", 00:07:21.928 "dma_device_type": 1 00:07:21.928 }, 00:07:21.928 { 00:07:21.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.928 "dma_device_type": 2 00:07:21.928 } 00:07:21.928 ], 00:07:21.928 "driver_specific": {} 00:07:21.928 } 00:07:21.928 ]' 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # jq '.[] .block_size' 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # bs=512 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # jq '.[] .num_blocks' 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # nb=1048576 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_size=512 00:07:21.928 23:33:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # echo 512 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:21.928 23:33:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:23.301 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:23.301 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1192 -- # local i=0 00:07:23.301 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:23.301 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:23.301 23:33:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # sleep 2 00:07:25.200 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:07:25.200 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:07:25.200 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:07:25.200 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:07:25.200 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:07:25.200 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # 
return 0 00:07:25.201 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:25.201 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:25.201 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:25.201 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:25.201 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:25.201 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:25.201 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:25.201 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:25.201 23:33:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:25.201 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:25.201 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:25.459 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:26.026 23:33:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:26.960 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 
00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.961 ************************************ 00:07:26.961 START TEST filesystem_in_capsule_ext4 00:07:26.961 ************************************ 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@918 -- # local fstype=ext4 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@920 -- # local i=0 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@921 -- # local force 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # '[' ext4 = ext4 ']' 00:07:26.961 23:33:15 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # force=-F 00:07:26.961 23:33:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:26.961 mke2fs 1.46.5 (30-Dec-2021) 00:07:27.220 Discarding device blocks: 0/522240 done 00:07:27.220 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:27.220 Filesystem UUID: 56a312b6-069a-4d3e-90a2-a89baee91e07 00:07:27.220 Superblock backups stored on blocks: 00:07:27.220 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:27.220 00:07:27.220 Allocating group tables: 0/64 done 00:07:27.220 Writing inode tables: 0/64 done 00:07:28.221 Creating journal (8192 blocks): done 00:07:28.221 Writing superblocks and filesystem accounting information: 0/64 done 00:07:28.221 00:07:28.221 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # return 0 00:07:28.221 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:28.479 23:33:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 856075 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.479 00:07:28.479 real 0m1.526s 00:07:28.479 user 0m0.024s 00:07:28.479 sys 0m0.068s 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:28.479 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:28.479 ************************************ 00:07:28.479 END TEST filesystem_in_capsule_ext4 00:07:28.479 ************************************ 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:28.738 
23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.738 ************************************ 00:07:28.738 START TEST filesystem_in_capsule_btrfs 00:07:28.738 ************************************ 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@918 -- # local fstype=btrfs 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@920 -- # local i=0 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@921 -- # local force 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # '[' btrfs = ext4 ']' 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # force=-f 00:07:28.738 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # mkfs.btrfs -f 
/dev/nvme0n1p1 00:07:28.995 btrfs-progs v6.6.2 00:07:28.995 See https://btrfs.readthedocs.io for more information. 00:07:28.995 00:07:28.995 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:28.995 NOTE: several default settings have changed in version 5.15, please make sure 00:07:28.995 this does not affect your deployments: 00:07:28.995 - DUP for metadata (-m dup) 00:07:28.995 - enabled no-holes (-O no-holes) 00:07:28.995 - enabled free-space-tree (-R free-space-tree) 00:07:28.995 00:07:28.996 Label: (null) 00:07:28.996 UUID: 9755a3cd-f7a9-4e75-a625-c2fc1d9d76e3 00:07:28.996 Node size: 16384 00:07:28.996 Sector size: 4096 00:07:28.996 Filesystem size: 510.00MiB 00:07:28.996 Block group profiles: 00:07:28.996 Data: single 8.00MiB 00:07:28.996 Metadata: DUP 32.00MiB 00:07:28.996 System: DUP 8.00MiB 00:07:28.996 SSD detected: yes 00:07:28.996 Zoned device: no 00:07:28.996 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:28.996 Runtime features: free-space-tree 00:07:28.996 Checksum: crc32c 00:07:28.996 Number of devices: 1 00:07:28.996 Devices: 00:07:28.996 ID SIZE PATH 00:07:28.996 1 510.00MiB /dev/nvme0n1p1 00:07:28.996 00:07:28.996 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # return 0 00:07:28.996 23:33:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:29.932 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:29.932 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:29.932 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:29.932 23:33:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:29.932 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:29.932 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:29.932 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 856075 00:07:29.932 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:29.932 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:29.932 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:29.932 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:29.933 00:07:29.933 real 0m1.095s 00:07:29.933 user 0m0.025s 00:07:29.933 sys 0m0.126s 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:29.933 ************************************ 00:07:29.933 END TEST filesystem_in_capsule_btrfs 00:07:29.933 ************************************ 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create 
xfs nvme0n1 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:29.933 ************************************ 00:07:29.933 START TEST filesystem_in_capsule_xfs 00:07:29.933 ************************************ 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create xfs nvme0n1 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@918 -- # local fstype=xfs 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@920 -- # local i=0 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@921 -- # local force 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # '[' xfs = ext4 ']' 00:07:29.933 23:33:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # force=-f 00:07:29.933 23:33:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:29.933 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:29.933 = sectsz=512 attr=2, projid32bit=1 00:07:29.933 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:29.933 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:29.933 data = bsize=4096 blocks=130560, imaxpct=25 00:07:29.933 = sunit=0 swidth=0 blks 00:07:29.933 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:29.933 log =internal log bsize=4096 blocks=16384, version=2 00:07:29.933 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:29.933 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:30.867 Discarding blocks...Done. 00:07:30.867 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # return 0 00:07:30.867 23:33:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:33.402 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:33.402 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:33.402 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:33.402 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:33.402 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:33.402 23:33:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:33.402 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 856075 00:07:33.402 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:33.402 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:33.402 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:33.402 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:33.661 00:07:33.661 real 0m3.707s 00:07:33.661 user 0m0.021s 00:07:33.661 sys 0m0.076s 00:07:33.661 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:33.661 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:33.661 ************************************ 00:07:33.661 END TEST filesystem_in_capsule_xfs 00:07:33.661 ************************************ 00:07:33.661 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:07:33.661 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:33.922 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1213 -- # local i=0 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # return 0 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 856075 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@942 -- # '[' -z 856075 ']' 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@946 -- # kill -0 856075 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # uname 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:33.922 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 856075 00:07:34.181 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:34.181 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:34.181 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # echo 'killing process with pid 856075' 00:07:34.181 killing process with pid 856075 00:07:34.181 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@961 -- # kill 856075 00:07:34.181 23:33:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # wait 856075 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:34.441 00:07:34.441 real 0m13.563s 00:07:34.441 user 0m53.337s 00:07:34.441 sys 0m1.239s 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:34.441 ************************************ 00:07:34.441 END TEST nvmf_filesystem_in_capsule 00:07:34.441 ************************************ 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1136 -- # return 0 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:34.441 23:33:23 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:34.441 rmmod nvme_tcp 00:07:34.441 rmmod nvme_fabrics 00:07:34.441 rmmod nvme_keyring 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.441 23:33:23 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.978 23:33:25 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:36.978 00:07:36.978 real 0m33.531s 00:07:36.978 user 1m42.547s 00:07:36.978 sys 0m6.687s 00:07:36.978 23:33:25 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:36.978 23:33:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.978 ************************************ 00:07:36.978 END TEST nvmf_filesystem 00:07:36.978 ************************************ 00:07:36.978 23:33:25 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:36.978 23:33:25 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:36.978 23:33:25 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:36.978 23:33:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:36.978 23:33:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:36.978 ************************************ 00:07:36.978 START TEST nvmf_target_discovery 00:07:36.978 ************************************ 00:07:36.978 23:33:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:36.978 * Looking for test storage... 
00:07:36.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:36.978 23:33:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.978 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:36.978 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.978 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:36.979 23:33:25 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:36.979 23:33:25 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:42.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.257 
23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:42.257 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:42.257 Found net devices under 0000:86:00.0: cvl_0_0 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:42.257 Found net devices under 0000:86:00.1: cvl_0_1 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.257 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.258 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:42.258 23:33:30 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.258 23:33:31 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:07:42.258 00:07:42.258 --- 10.0.0.2 ping statistics --- 00:07:42.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.258 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:42.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:07:42.258 00:07:42.258 --- 10.0.0.1 ping statistics --- 00:07:42.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.258 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=862082 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 862082 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@823 -- # '[' -z 862082 ']' 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:42.258 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:42.258 [2024-07-15 23:33:31.132965] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:07:42.258 [2024-07-15 23:33:31.133006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.258 [2024-07-15 23:33:31.190376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.517 [2024-07-15 23:33:31.263192] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.517 [2024-07-15 23:33:31.263236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.517 [2024-07-15 23:33:31.263242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.517 [2024-07-15 23:33:31.263248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.517 [2024-07-15 23:33:31.263253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
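[editor's note: the `-m 0xF` core mask passed to `nvmf_tgt` above selects the four cores reported by the "Total cores available: 4" notice and the four "Reactor started" lines that follow. A minimal illustrative sketch (not part of the SPDK test harness) of how such a DPDK/SPDK hex core mask decodes to core IDs:]

```python
def cores_from_mask(mask: int) -> list[int]:
    """Return the core IDs selected by an SPDK/DPDK core mask (one bit per core)."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

# 0xF has bits 0-3 set, matching the four "Reactor started on core N" notices below.
print(cores_from_mask(0xF))  # → [0, 1, 2, 3]
```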
00:07:42.517 [2024-07-15 23:33:31.263300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.517 [2024-07-15 23:33:31.263381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.517 [2024-07-15 23:33:31.263465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.517 [2024-07-15 23:33:31.263466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # return 0 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 [2024-07-15 23:33:31.986153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:43.086 23:33:31 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.086 23:33:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 Null1 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 [2024-07-15 23:33:32.031695] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.086 23:33:32 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 Null2 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.086 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.345 Null3 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.345 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.346 Null4 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:43.346 00:07:43.346 Discovery Log Number of Records 6, Generation counter 6 00:07:43.346 =====Discovery Log Entry 0====== 00:07:43.346 trtype: tcp 00:07:43.346 adrfam: ipv4 00:07:43.346 subtype: current discovery subsystem 00:07:43.346 treq: not required 00:07:43.346 portid: 0 00:07:43.346 trsvcid: 4420 00:07:43.346 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:43.346 traddr: 10.0.0.2 00:07:43.346 eflags: explicit discovery connections, duplicate discovery information 00:07:43.346 sectype: none 00:07:43.346 =====Discovery Log Entry 1====== 00:07:43.346 trtype: tcp 00:07:43.346 adrfam: ipv4 00:07:43.346 subtype: nvme subsystem 00:07:43.346 treq: not required 00:07:43.346 portid: 0 00:07:43.346 trsvcid: 4420 00:07:43.346 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:43.346 traddr: 10.0.0.2 00:07:43.346 eflags: none 00:07:43.346 sectype: none 00:07:43.346 =====Discovery Log Entry 2====== 00:07:43.346 trtype: tcp 00:07:43.346 adrfam: ipv4 00:07:43.346 subtype: nvme subsystem 00:07:43.346 treq: not required 00:07:43.346 portid: 
0 00:07:43.346 trsvcid: 4420 00:07:43.346 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:43.346 traddr: 10.0.0.2 00:07:43.346 eflags: none 00:07:43.346 sectype: none 00:07:43.346 =====Discovery Log Entry 3====== 00:07:43.346 trtype: tcp 00:07:43.346 adrfam: ipv4 00:07:43.346 subtype: nvme subsystem 00:07:43.346 treq: not required 00:07:43.346 portid: 0 00:07:43.346 trsvcid: 4420 00:07:43.346 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:43.346 traddr: 10.0.0.2 00:07:43.346 eflags: none 00:07:43.346 sectype: none 00:07:43.346 =====Discovery Log Entry 4====== 00:07:43.346 trtype: tcp 00:07:43.346 adrfam: ipv4 00:07:43.346 subtype: nvme subsystem 00:07:43.346 treq: not required 00:07:43.346 portid: 0 00:07:43.346 trsvcid: 4420 00:07:43.346 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:43.346 traddr: 10.0.0.2 00:07:43.346 eflags: none 00:07:43.346 sectype: none 00:07:43.346 =====Discovery Log Entry 5====== 00:07:43.346 trtype: tcp 00:07:43.346 adrfam: ipv4 00:07:43.346 subtype: discovery subsystem referral 00:07:43.346 treq: not required 00:07:43.346 portid: 0 00:07:43.346 trsvcid: 4430 00:07:43.346 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:43.346 traddr: 10.0.0.2 00:07:43.346 eflags: none 00:07:43.346 sectype: none 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:43.346 Perform nvmf subsystem discovery via RPC 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.346 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.346 [ 00:07:43.346 { 00:07:43.346 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:43.346 "subtype": "Discovery", 00:07:43.346 "listen_addresses": [ 00:07:43.346 { 00:07:43.346 "trtype": "TCP", 00:07:43.346 "adrfam": "IPv4", 00:07:43.346 "traddr": "10.0.0.2", 
00:07:43.346 "trsvcid": "4420" 00:07:43.346 } 00:07:43.346 ], 00:07:43.346 "allow_any_host": true, 00:07:43.346 "hosts": [] 00:07:43.346 }, 00:07:43.346 { 00:07:43.346 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:43.346 "subtype": "NVMe", 00:07:43.346 "listen_addresses": [ 00:07:43.346 { 00:07:43.346 "trtype": "TCP", 00:07:43.346 "adrfam": "IPv4", 00:07:43.346 "traddr": "10.0.0.2", 00:07:43.346 "trsvcid": "4420" 00:07:43.346 } 00:07:43.346 ], 00:07:43.346 "allow_any_host": true, 00:07:43.346 "hosts": [], 00:07:43.346 "serial_number": "SPDK00000000000001", 00:07:43.346 "model_number": "SPDK bdev Controller", 00:07:43.346 "max_namespaces": 32, 00:07:43.346 "min_cntlid": 1, 00:07:43.346 "max_cntlid": 65519, 00:07:43.346 "namespaces": [ 00:07:43.346 { 00:07:43.346 "nsid": 1, 00:07:43.346 "bdev_name": "Null1", 00:07:43.346 "name": "Null1", 00:07:43.346 "nguid": "BE818792249744309FD7BD892C840D97", 00:07:43.346 "uuid": "be818792-2497-4430-9fd7-bd892c840d97" 00:07:43.346 } 00:07:43.346 ] 00:07:43.346 }, 00:07:43.346 { 00:07:43.346 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:43.346 "subtype": "NVMe", 00:07:43.346 "listen_addresses": [ 00:07:43.346 { 00:07:43.346 "trtype": "TCP", 00:07:43.346 "adrfam": "IPv4", 00:07:43.346 "traddr": "10.0.0.2", 00:07:43.346 "trsvcid": "4420" 00:07:43.346 } 00:07:43.346 ], 00:07:43.346 "allow_any_host": true, 00:07:43.346 "hosts": [], 00:07:43.346 "serial_number": "SPDK00000000000002", 00:07:43.346 "model_number": "SPDK bdev Controller", 00:07:43.346 "max_namespaces": 32, 00:07:43.346 "min_cntlid": 1, 00:07:43.346 "max_cntlid": 65519, 00:07:43.346 "namespaces": [ 00:07:43.346 { 00:07:43.346 "nsid": 1, 00:07:43.346 "bdev_name": "Null2", 00:07:43.346 "name": "Null2", 00:07:43.346 "nguid": "8BC2B4D5E068411CA15123B92F3AF4FE", 00:07:43.346 "uuid": "8bc2b4d5-e068-411c-a151-23b92f3af4fe" 00:07:43.346 } 00:07:43.346 ] 00:07:43.346 }, 00:07:43.346 { 00:07:43.346 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:43.346 "subtype": "NVMe", 00:07:43.346 
"listen_addresses": [ 00:07:43.346 { 00:07:43.346 "trtype": "TCP", 00:07:43.346 "adrfam": "IPv4", 00:07:43.346 "traddr": "10.0.0.2", 00:07:43.346 "trsvcid": "4420" 00:07:43.346 } 00:07:43.346 ], 00:07:43.346 "allow_any_host": true, 00:07:43.346 "hosts": [], 00:07:43.346 "serial_number": "SPDK00000000000003", 00:07:43.346 "model_number": "SPDK bdev Controller", 00:07:43.346 "max_namespaces": 32, 00:07:43.346 "min_cntlid": 1, 00:07:43.346 "max_cntlid": 65519, 00:07:43.346 "namespaces": [ 00:07:43.346 { 00:07:43.346 "nsid": 1, 00:07:43.346 "bdev_name": "Null3", 00:07:43.346 "name": "Null3", 00:07:43.346 "nguid": "396DE63959394CF589AFCC5E1C051E64", 00:07:43.346 "uuid": "396de639-5939-4cf5-89af-cc5e1c051e64" 00:07:43.346 } 00:07:43.346 ] 00:07:43.346 }, 00:07:43.346 { 00:07:43.346 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:43.346 "subtype": "NVMe", 00:07:43.346 "listen_addresses": [ 00:07:43.346 { 00:07:43.346 "trtype": "TCP", 00:07:43.346 "adrfam": "IPv4", 00:07:43.346 "traddr": "10.0.0.2", 00:07:43.346 "trsvcid": "4420" 00:07:43.346 } 00:07:43.346 ], 00:07:43.346 "allow_any_host": true, 00:07:43.346 "hosts": [], 00:07:43.346 "serial_number": "SPDK00000000000004", 00:07:43.346 "model_number": "SPDK bdev Controller", 00:07:43.346 "max_namespaces": 32, 00:07:43.346 "min_cntlid": 1, 00:07:43.346 "max_cntlid": 65519, 00:07:43.346 "namespaces": [ 00:07:43.346 { 00:07:43.346 "nsid": 1, 00:07:43.346 "bdev_name": "Null4", 00:07:43.346 "name": "Null4", 00:07:43.346 "nguid": "016A8F51A62845DC804BE5D9B95F02A0", 00:07:43.346 "uuid": "016a8f51-a628-45dc-804b-e5d9b95f02a0" 00:07:43.346 } 00:07:43.346 ] 00:07:43.346 } 00:07:43.346 ] 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.347 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.606 rmmod nvme_tcp 00:07:43.606 rmmod nvme_fabrics 00:07:43.606 rmmod nvme_keyring 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:43.606 
23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 862082 ']' 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 862082 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@942 -- # '[' -z 862082 ']' 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # kill -0 862082 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # uname 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 862082 00:07:43.606 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:43.607 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:43.607 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@960 -- # echo 'killing process with pid 862082' 00:07:43.607 killing process with pid 862082 00:07:43.607 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@961 -- # kill 862082 00:07:43.607 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # wait 862082 00:07:43.867 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.867 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:43.867 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.867 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.867 23:33:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.867 23:33:32 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.867 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.867 23:33:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.407 23:33:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:46.407 00:07:46.407 real 0m9.277s 00:07:46.407 user 0m7.351s 00:07:46.407 sys 0m4.493s 00:07:46.407 23:33:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:46.407 23:33:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:46.407 ************************************ 00:07:46.407 END TEST nvmf_target_discovery 00:07:46.407 ************************************ 00:07:46.407 23:33:34 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:07:46.407 23:33:34 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:46.407 23:33:34 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:46.407 23:33:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:46.407 23:33:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.407 ************************************ 00:07:46.407 START TEST nvmf_referrals 00:07:46.407 ************************************ 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:46.407 * Looking for test storage... 
00:07:46.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.407 23:33:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:46.408 
23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:46.408 23:33:34 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:51.720 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:51.720 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.720 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:51.721 Found net devices under 0000:86:00.0: cvl_0_0 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.721 23:33:39 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:51.721 Found net devices under 0000:86:00.1: cvl_0_1 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.721 23:33:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:51.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:07:51.721 00:07:51.721 --- 10.0.0.2 ping statistics --- 00:07:51.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.721 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:07:51.721 00:07:51.721 --- 10.0.0.1 ping statistics --- 00:07:51.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.721 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:51.721 23:33:40 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=865738 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 865738 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@823 -- # '[' -z 865738 ']' 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:51.721 23:33:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:51.721 [2024-07-15 23:33:40.282574] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:51.721 [2024-07-15 23:33:40.282619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.721 [2024-07-15 23:33:40.340381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.721 [2024-07-15 23:33:40.422010] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.721 [2024-07-15 23:33:40.422044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:51.721 [2024-07-15 23:33:40.422051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.721 [2024-07-15 23:33:40.422057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.721 [2024-07-15 23:33:40.422062] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.721 [2024-07-15 23:33:40.422120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.721 [2024-07-15 23:33:40.422136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.721 [2024-07-15 23:33:40.422221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.721 [2024-07-15 23:33:40.422222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # return 0 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.289 [2024-07-15 23:33:41.138367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.289 23:33:41 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.289 [2024-07-15 23:33:41.151710] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@48 -- # jq length 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.289 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.290 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.290 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:52.290 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:52.290 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:52.290 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.290 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:52.290 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.290 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:52.290 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.290 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t 
tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.549 23:33:41 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.549 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n 
nqn.2016-06.io.spdk:cnode1 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | 
select(.subtype != "current discovery subsystem").traddr' 00:07:52.808 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:53.066 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:53.067 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:53.067 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:53.067 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:53.067 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:53.067 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.067 23:33:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable
00:07:53.325 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:07:53.326 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:53.584 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:53.842 rmmod nvme_tcp
00:07:53.842 rmmod nvme_fabrics
00:07:53.842 rmmod nvme_keyring
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 865738 ']'
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 865738
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@942 -- # '[' -z 865738 ']'
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # kill -0 865738
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # uname
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:07:53.842 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 865738
00:07:54.134 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:07:54.134 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:07:54.134 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@960 -- # echo 'killing process with pid 865738'
00:07:54.134 killing process with pid 865738 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@961 -- # kill 865738
00:07:54.134 23:33:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # wait 865738
00:07:54.134 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:54.134 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:54.134 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:54.134 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:54.134 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:54.134 23:33:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:54.134 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:54.134 23:33:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:56.662 23:33:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:56.662
00:07:56.662 real 0m10.257s
00:07:56.662 user 0m12.723s
00:07:56.662 sys 0m4.566s
00:07:56.662 23:33:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1118 -- # xtrace_disable
00:07:56.662 23:33:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:07:56.662 ************************************
00:07:56.662 END TEST nvmf_referrals
00:07:56.662 ************************************
00:07:56.662 23:33:45 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:07:56.662 23:33:45 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:07:56.662 23:33:45 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:07:56.662 23:33:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
00:07:56.662 23:33:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:56.662 ************************************
00:07:56.662 START TEST nvmf_connect_disconnect
00:07:56.662 ************************************
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:07:56.662 * Looking for test storage...
00:07:56.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable
00:07:56.662 23:33:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=()
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=()
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=()
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=()
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=()
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=()
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=()
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:01.936 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:08:01.937 Found 0000:86:00.0 (0x8086 - 0x159b)
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:08:01.937 Found 0000:86:00.1 (0x8086 - 0x159b)
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:08:01.937 Found net devices under 0000:86:00.0: cvl_0_0
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:08:01.937 Found net devices under 0000:86:00.1: cvl_0_1
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:08:01.937 23:33:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:08:01.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:01.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms
00:08:01.937
00:08:01.937 --- 10.0.0.2 ping statistics ---
00:08:01.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:01.937 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:01.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:01.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms
00:08:01.937
00:08:01.937 --- 10.0.0.1 ping statistics ---
00:08:01.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:01.937 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@716 -- # xtrace_disable
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=869714
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 869714
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@823 -- # '[' -z 869714 ']'
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # local max_retries=100
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:01.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # xtrace_disable
00:08:01.937 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:01.937 [2024-07-15 23:33:50.171615] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:08:01.937 [2024-07-15 23:33:50.171658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:01.937 [2024-07-15 23:33:50.227465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:01.937 [2024-07-15 23:33:50.308131] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:01.937 [2024-07-15 23:33:50.308169] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:01.937 [2024-07-15 23:33:50.308176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:01.937 [2024-07-15 23:33:50.308183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:01.937 [2024-07-15 23:33:50.308188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:01.937 [2024-07-15 23:33:50.308548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 [2024-07-15 23:33:50.308586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 [2024-07-15 23:33:50.308565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 [2024-07-15 23:33:50.308584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:02.197 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:08:02.197 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # return 0
00:08:02.197 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:08:02.197 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable
00:08:02.197 23:33:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:02.197 [2024-07-15 23:33:51.016301] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:08:02.197 [2024-07-15 23:33:51.072173] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:08:02.197 23:33:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
00:08:05.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:08.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:12.071 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:15.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:18.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:18.682 rmmod nvme_tcp
00:08:18.682 rmmod nvme_fabrics
00:08:18.682 rmmod nvme_keyring
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 869714 ']'
00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 869714
00:08:18.682 23:34:07
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@942 -- # '[' -z 869714 ']' 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # kill -0 869714 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # uname 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 869714 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # echo 'killing process with pid 869714' 00:08:18.682 killing process with pid 869714 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@961 -- # kill 869714 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # wait 869714 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.682 23:34:07 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.223 23:34:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:21.223 00:08:21.223 real 0m24.538s 00:08:21.223 user 1m10.130s 00:08:21.223 sys 0m4.740s 00:08:21.223 23:34:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:21.223 23:34:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:21.223 ************************************ 00:08:21.223 END TEST nvmf_connect_disconnect 00:08:21.223 ************************************ 00:08:21.223 23:34:09 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:08:21.223 23:34:09 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:21.223 23:34:09 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:21.223 23:34:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:21.223 23:34:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:21.223 ************************************ 00:08:21.223 START TEST nvmf_multitarget 00:08:21.223 ************************************ 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:21.223 * Looking for test storage... 
00:08:21.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.223 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:21.224 23:34:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:26.500 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget 
-- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:26.501 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:26.501 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:26.501 Found net devices under 0000:86:00.0: cvl_0_0 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:26.501 23:34:14 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:26.501 Found net devices under 0000:86:00.1: cvl_0_1 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.501 23:34:14 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:26.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:08:26.501 00:08:26.501 --- 10.0.0.2 ping statistics --- 00:08:26.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.501 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:08:26.501 00:08:26.501 --- 10.0.0.1 ping statistics --- 00:08:26.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.501 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=875946 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 875946 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- 
common/autotest_common.sh@823 -- # '[' -z 875946 ']' 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:26.501 23:34:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:26.501 [2024-07-15 23:34:14.937879] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:08:26.502 [2024-07-15 23:34:14.937925] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.502 [2024-07-15 23:34:14.997496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.502 [2024-07-15 23:34:15.078843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.502 [2024-07-15 23:34:15.078877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.502 [2024-07-15 23:34:15.078884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.502 [2024-07-15 23:34:15.078891] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.502 [2024-07-15 23:34:15.078895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:26.502 [2024-07-15 23:34:15.078938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.502 [2024-07-15 23:34:15.078964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.502 [2024-07-15 23:34:15.079050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.502 [2024-07-15 23:34:15.079051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # return 0 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:27.069 "nvmf_tgt_1" 00:08:27.069 23:34:15 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:27.328 "nvmf_tgt_2" 00:08:27.328 23:34:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:27.328 23:34:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:27.328 23:34:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:27.328 23:34:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:27.328 true 00:08:27.328 23:34:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:27.586 true 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.586 23:34:16 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.586 rmmod nvme_tcp 00:08:27.586 rmmod nvme_fabrics 00:08:27.586 rmmod nvme_keyring 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 875946 ']' 00:08:27.586 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 875946 00:08:27.587 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@942 -- # '[' -z 875946 ']' 00:08:27.587 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # kill -0 875946 00:08:27.587 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # uname 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 875946 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@960 -- # echo 'killing process with pid 875946' 00:08:27.845 killing process with pid 875946 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@961 -- # kill 875946 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # wait 875946 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.845 23:34:16 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.381 23:34:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:30.381 00:08:30.381 real 0m9.079s 00:08:30.381 user 0m8.868s 00:08:30.381 sys 0m4.269s 00:08:30.381 23:34:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:30.381 23:34:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:30.381 ************************************ 00:08:30.381 END TEST nvmf_multitarget 00:08:30.381 ************************************ 00:08:30.381 23:34:18 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:08:30.381 23:34:18 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:30.381 23:34:18 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:30.381 23:34:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:30.381 23:34:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.381 ************************************ 00:08:30.381 START TEST nvmf_rpc 00:08:30.381 ************************************ 00:08:30.381 23:34:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:30.381 * Looking for test storage... 
00:08:30.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.381 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.382 23:34:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:35.662 23:34:24 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 
== mlx5 ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:35.662 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:35.662 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:35.662 Found net devices under 0000:86:00.0: cvl_0_0 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:35.662 Found net devices under 0000:86:00.1: cvl_0_1 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 
00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.662 23:34:24 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:35.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:08:35.662 00:08:35.662 --- 10.0.0.2 ping statistics --- 00:08:35.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.662 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:08:35.662 00:08:35.662 --- 10.0.0.1 ping statistics --- 00:08:35.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.662 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:35.662 
23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=879671 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 879671 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@823 -- # '[' -z 879671 ']' 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:35.662 23:34:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:35.662 [2024-07-15 23:34:24.461113] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:08:35.662 [2024-07-15 23:34:24.461173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.662 [2024-07-15 23:34:24.519175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.662 [2024-07-15 23:34:24.600668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:35.662 [2024-07-15 23:34:24.600701] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.662 [2024-07-15 23:34:24.600707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.662 [2024-07-15 23:34:24.600713] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.662 [2024-07-15 23:34:24.600718] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.662 [2024-07-15 23:34:24.600759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.662 [2024-07-15 23:34:24.600793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.662 [2024-07-15 23:34:24.600902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.662 [2024-07-15 23:34:24.600903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # return 0 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # 
stats='{ 00:08:36.602 "tick_rate": 2300000000, 00:08:36.602 "poll_groups": [ 00:08:36.602 { 00:08:36.602 "name": "nvmf_tgt_poll_group_000", 00:08:36.602 "admin_qpairs": 0, 00:08:36.602 "io_qpairs": 0, 00:08:36.602 "current_admin_qpairs": 0, 00:08:36.602 "current_io_qpairs": 0, 00:08:36.602 "pending_bdev_io": 0, 00:08:36.602 "completed_nvme_io": 0, 00:08:36.602 "transports": [] 00:08:36.602 }, 00:08:36.602 { 00:08:36.602 "name": "nvmf_tgt_poll_group_001", 00:08:36.602 "admin_qpairs": 0, 00:08:36.602 "io_qpairs": 0, 00:08:36.602 "current_admin_qpairs": 0, 00:08:36.602 "current_io_qpairs": 0, 00:08:36.602 "pending_bdev_io": 0, 00:08:36.602 "completed_nvme_io": 0, 00:08:36.602 "transports": [] 00:08:36.602 }, 00:08:36.602 { 00:08:36.602 "name": "nvmf_tgt_poll_group_002", 00:08:36.602 "admin_qpairs": 0, 00:08:36.602 "io_qpairs": 0, 00:08:36.602 "current_admin_qpairs": 0, 00:08:36.602 "current_io_qpairs": 0, 00:08:36.602 "pending_bdev_io": 0, 00:08:36.602 "completed_nvme_io": 0, 00:08:36.602 "transports": [] 00:08:36.602 }, 00:08:36.602 { 00:08:36.602 "name": "nvmf_tgt_poll_group_003", 00:08:36.602 "admin_qpairs": 0, 00:08:36.602 "io_qpairs": 0, 00:08:36.602 "current_admin_qpairs": 0, 00:08:36.602 "current_io_qpairs": 0, 00:08:36.602 "pending_bdev_io": 0, 00:08:36.602 "completed_nvme_io": 0, 00:08:36.602 "transports": [] 00:08:36.602 } 00:08:36.602 ] 00:08:36.602 }' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 
00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.602 [2024-07-15 23:34:25.421620] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:36.602 "tick_rate": 2300000000, 00:08:36.602 "poll_groups": [ 00:08:36.602 { 00:08:36.602 "name": "nvmf_tgt_poll_group_000", 00:08:36.602 "admin_qpairs": 0, 00:08:36.602 "io_qpairs": 0, 00:08:36.602 "current_admin_qpairs": 0, 00:08:36.602 "current_io_qpairs": 0, 00:08:36.602 "pending_bdev_io": 0, 00:08:36.602 "completed_nvme_io": 0, 00:08:36.602 "transports": [ 00:08:36.602 { 00:08:36.602 "trtype": "TCP" 00:08:36.602 } 00:08:36.602 ] 00:08:36.602 }, 00:08:36.602 { 00:08:36.602 "name": "nvmf_tgt_poll_group_001", 00:08:36.602 "admin_qpairs": 0, 00:08:36.602 "io_qpairs": 0, 00:08:36.602 "current_admin_qpairs": 0, 00:08:36.602 "current_io_qpairs": 0, 00:08:36.602 "pending_bdev_io": 0, 00:08:36.602 "completed_nvme_io": 0, 00:08:36.602 "transports": [ 00:08:36.602 { 00:08:36.602 "trtype": "TCP" 00:08:36.602 } 00:08:36.602 ] 00:08:36.602 }, 00:08:36.602 { 00:08:36.602 "name": "nvmf_tgt_poll_group_002", 00:08:36.602 "admin_qpairs": 0, 00:08:36.602 "io_qpairs": 0, 00:08:36.602 "current_admin_qpairs": 0, 00:08:36.602 "current_io_qpairs": 0, 00:08:36.602 
"pending_bdev_io": 0, 00:08:36.602 "completed_nvme_io": 0, 00:08:36.602 "transports": [ 00:08:36.602 { 00:08:36.602 "trtype": "TCP" 00:08:36.602 } 00:08:36.602 ] 00:08:36.602 }, 00:08:36.602 { 00:08:36.602 "name": "nvmf_tgt_poll_group_003", 00:08:36.602 "admin_qpairs": 0, 00:08:36.602 "io_qpairs": 0, 00:08:36.602 "current_admin_qpairs": 0, 00:08:36.602 "current_io_qpairs": 0, 00:08:36.602 "pending_bdev_io": 0, 00:08:36.602 "completed_nvme_io": 0, 00:08:36.602 "transports": [ 00:08:36.602 { 00:08:36.602 "trtype": "TCP" 00:08:36.602 } 00:08:36.602 ] 00:08:36.602 } 00:08:36.602 ] 00:08:36.602 }' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:36.602 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.603 Malloc1 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:36.603 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.863 [2024-07-15 23:34:25.593802] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.863 
23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # local es=0 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@630 -- # local arg=nvme 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # type -t nvme 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # type -P nvme 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # arg=/usr/sbin/nvme 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # [[ -x /usr/sbin/nvme ]] 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:08:36.863 [2024-07-15 23:34:25.618451] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: 
Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:36.863 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:36.863 could not add new controller: failed to write to nvme-fabrics device 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # es=1 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:36.863 23:34:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:37.803 23:34:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:37.803 23:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:08:37.804 23:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:08:37.804 23:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:08:37.804 23:34:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:40.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:40.375 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # local es=0 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@630 -- # local arg=nvme 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # type -t nvme 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # type -P nvme 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # arg=/usr/sbin/nvme 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # [[ -x /usr/sbin/nvme ]] 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:40.376 [2024-07-15 23:34:28.892418] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:08:40.376 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:40.376 could not add new controller: failed to write to nvme-fabrics device 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@645 -- # 
es=1 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:40.376 23:34:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:41.313 23:34:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:41.313 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:08:41.313 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:08:41.313 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:08:41.313 23:34:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:08:43.220 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:08:43.220 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:08:43.220 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:08:43.220 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:08:43.220 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:08:43.220 23:34:32 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1202 -- # return 0 00:08:43.220 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:43.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.220 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.480 [2024-07-15 23:34:32.248109] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:43.480 23:34:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:44.417 23:34:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:44.417 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:08:44.417 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:08:44.417 23:34:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:08:44.417 23:34:33 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1199 -- # sleep 2 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:46.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.951 [2024-07-15 23:34:35.541484] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:46.951 23:34:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.889 23:34:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:47.890 23:34:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:08:47.890 23:34:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:08:47.890 23:34:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:08:47.890 23:34:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:08:49.793 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:08:49.793 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:08:49.793 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.793 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:08:49.793 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.793 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:08:49.793 23:34:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:08:50.052 23:34:38 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.052 [2024-07-15 23:34:38.892134] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:50.052 23:34:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:51.431 23:34:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:51.431 23:34:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:08:51.431 23:34:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.431 23:34:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:08:51.431 23:34:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 
15 )) 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.338 [2024-07-15 23:34:42.236482] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:53.338 23:34:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.717 23:34:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.717 23:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:08:54.717 23:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.717 23:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:08:54.717 23:34:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:56.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:56.622 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
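The `waitforserial` / `waitforserial_disconnect` steps that recur in the trace poll `lsblk -l -o NAME,SERIAL` for the subsystem serial until the namespace device appears (or disappears). An approximation reconstructed from the xtrace lines, not the exact `autotest_common.sh` source:

```shell
# Approximation of the waitforserial helper, reconstructed from the
# xtrace output (the real helper lives in autotest_common.sh). Polls
# lsblk until a block device carrying the expected serial shows up.
waitforserial() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    if (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )); then
      return 0
    fi
    sleep 2
  done
  return 1
}

# Counterpart: wait until no device with the serial remains after
# `nvme disconnect`.
waitforserial_disconnect() {
  local serial=$1 i=0
  while (( i++ <= 15 )); do
    if ! lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; then
      return 0
    fi
    sleep 2
  done
  return 1
}
```

In the log both helpers return on the first check (`nvme_devices=1`, then the serial vanishing after disconnect), so the 2-second polling loop never has to retry.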
00:08:56.881 [2024-07-15 23:34:45.611859] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:56.881 23:34:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.303 23:34:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.303 23:34:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:08:58.303 23:34:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.303 23:34:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:08:58.303 23:34:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:00.205 23:34:48 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.205 
23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.205 [2024-07-15 23:34:48.963005] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.205 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- 
# [[ 0 == 0 ]] 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 [2024-07-15 23:34:49.011120] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 [2024-07-15 23:34:49.063288] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 [2024-07-15 23:34:49.111446] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 [2024-07-15 23:34:49.159615] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:00.206 
23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.206 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:00.466 "tick_rate": 2300000000, 00:09:00.466 "poll_groups": [ 00:09:00.466 { 00:09:00.466 "name": 
"nvmf_tgt_poll_group_000", 00:09:00.466 "admin_qpairs": 2, 00:09:00.466 "io_qpairs": 168, 00:09:00.466 "current_admin_qpairs": 0, 00:09:00.466 "current_io_qpairs": 0, 00:09:00.466 "pending_bdev_io": 0, 00:09:00.466 "completed_nvme_io": 267, 00:09:00.466 "transports": [ 00:09:00.466 { 00:09:00.466 "trtype": "TCP" 00:09:00.466 } 00:09:00.466 ] 00:09:00.466 }, 00:09:00.466 { 00:09:00.466 "name": "nvmf_tgt_poll_group_001", 00:09:00.466 "admin_qpairs": 2, 00:09:00.466 "io_qpairs": 168, 00:09:00.466 "current_admin_qpairs": 0, 00:09:00.466 "current_io_qpairs": 0, 00:09:00.466 "pending_bdev_io": 0, 00:09:00.466 "completed_nvme_io": 220, 00:09:00.466 "transports": [ 00:09:00.466 { 00:09:00.466 "trtype": "TCP" 00:09:00.466 } 00:09:00.466 ] 00:09:00.466 }, 00:09:00.466 { 00:09:00.466 "name": "nvmf_tgt_poll_group_002", 00:09:00.466 "admin_qpairs": 1, 00:09:00.466 "io_qpairs": 168, 00:09:00.466 "current_admin_qpairs": 0, 00:09:00.466 "current_io_qpairs": 0, 00:09:00.466 "pending_bdev_io": 0, 00:09:00.466 "completed_nvme_io": 267, 00:09:00.466 "transports": [ 00:09:00.466 { 00:09:00.466 "trtype": "TCP" 00:09:00.466 } 00:09:00.466 ] 00:09:00.466 }, 00:09:00.466 { 00:09:00.466 "name": "nvmf_tgt_poll_group_003", 00:09:00.466 "admin_qpairs": 2, 00:09:00.466 "io_qpairs": 168, 00:09:00.466 "current_admin_qpairs": 0, 00:09:00.466 "current_io_qpairs": 0, 00:09:00.466 "pending_bdev_io": 0, 00:09:00.466 "completed_nvme_io": 268, 00:09:00.466 "transports": [ 00:09:00.466 { 00:09:00.466 "trtype": "TCP" 00:09:00.466 } 00:09:00.466 ] 00:09:00.466 } 00:09:00.466 ] 00:09:00.466 }' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:00.466 23:34:49 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:00.466 rmmod nvme_tcp 00:09:00.466 rmmod nvme_fabrics 00:09:00.466 rmmod nvme_keyring 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 879671 ']' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 879671 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@942 -- # '[' -z 879671 ']' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # kill -0 879671 00:09:00.466 23:34:49 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # uname 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 879671 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 879671' 00:09:00.466 killing process with pid 879671 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@961 -- # kill 879671 00:09:00.466 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # wait 879671 00:09:00.725 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.725 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:00.725 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:00.725 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.725 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.725 23:34:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.725 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.725 23:34:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.260 23:34:51 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:03.260 00:09:03.260 real 0m32.785s 00:09:03.260 user 1m41.401s 00:09:03.260 sys 0m5.834s 00:09:03.260 23:34:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:03.260 23:34:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.260 ************************************ 
00:09:03.260 END TEST nvmf_rpc 00:09:03.260 ************************************ 00:09:03.260 23:34:51 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:03.260 23:34:51 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:03.260 23:34:51 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:03.260 23:34:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:03.260 23:34:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:03.260 ************************************ 00:09:03.260 START TEST nvmf_invalid 00:09:03.260 ************************************ 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:03.260 * Looking for test storage... 00:09:03.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # 
NVMF_TRANSPORT_OPTS= 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.260 23:34:51 
nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:03.260 23:34:51 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.532 23:34:57 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:08.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:08.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:08.532 Found net devices under 0000:86:00.0: cvl_0_0 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:08.532 Found net devices under 0000:86:00.1: cvl_0_1 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.532 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.533 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:09:08.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:09:08.533 00:09:08.533 --- 10.0.0.2 ping statistics --- 00:09:08.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.533 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:09:08.533 00:09:08.533 --- 10.0.0.1 ping statistics --- 00:09:08.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.533 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:08.533 
23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=887499 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 887499 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@823 -- # '[' -z 887499 ']' 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.533 23:34:57 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:08.533 [2024-07-15 23:34:57.353063] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:09:08.533 [2024-07-15 23:34:57.353106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.533 [2024-07-15 23:34:57.411454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:08.533 [2024-07-15 23:34:57.491847] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.533 [2024-07-15 23:34:57.491881] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.533 [2024-07-15 23:34:57.491888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.533 [2024-07-15 23:34:57.491894] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.533 [2024-07-15 23:34:57.491899] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.533 [2024-07-15 23:34:57.491939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.533 [2024-07-15 23:34:57.491958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.533 [2024-07-15 23:34:57.492027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.533 [2024-07-15 23:34:57.492028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # return 0 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode25189 00:09:09.467 [2024-07-15 23:34:58.367609] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- 
# out='request: 00:09:09.467 { 00:09:09.467 "nqn": "nqn.2016-06.io.spdk:cnode25189", 00:09:09.467 "tgt_name": "foobar", 00:09:09.467 "method": "nvmf_create_subsystem", 00:09:09.467 "req_id": 1 00:09:09.467 } 00:09:09.467 Got JSON-RPC error response 00:09:09.467 response: 00:09:09.467 { 00:09:09.467 "code": -32603, 00:09:09.467 "message": "Unable to find target foobar" 00:09:09.467 }' 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:09.467 { 00:09:09.467 "nqn": "nqn.2016-06.io.spdk:cnode25189", 00:09:09.467 "tgt_name": "foobar", 00:09:09.467 "method": "nvmf_create_subsystem", 00:09:09.467 "req_id": 1 00:09:09.467 } 00:09:09.467 Got JSON-RPC error response 00:09:09.467 response: 00:09:09.467 { 00:09:09.467 "code": -32603, 00:09:09.467 "message": "Unable to find target foobar" 00:09:09.467 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:09.467 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15837 00:09:09.726 [2024-07-15 23:34:58.556292] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15837: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:09.726 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:09.726 { 00:09:09.726 "nqn": "nqn.2016-06.io.spdk:cnode15837", 00:09:09.726 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:09.726 "method": "nvmf_create_subsystem", 00:09:09.726 "req_id": 1 00:09:09.726 } 00:09:09.726 Got JSON-RPC error response 00:09:09.726 response: 00:09:09.726 { 00:09:09.726 "code": -32602, 00:09:09.726 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:09.726 }' 00:09:09.726 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:09.726 { 00:09:09.726 "nqn": 
"nqn.2016-06.io.spdk:cnode15837", 00:09:09.726 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:09.726 "method": "nvmf_create_subsystem", 00:09:09.726 "req_id": 1 00:09:09.726 } 00:09:09.726 Got JSON-RPC error response 00:09:09.726 response: 00:09:09.726 { 00:09:09.726 "code": -32602, 00:09:09.726 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:09.726 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:09.726 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:09.726 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11577 00:09:09.986 [2024-07-15 23:34:58.744898] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11577: invalid model number 'SPDK_Controller' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:09.986 { 00:09:09.986 "nqn": "nqn.2016-06.io.spdk:cnode11577", 00:09:09.986 "model_number": "SPDK_Controller\u001f", 00:09:09.986 "method": "nvmf_create_subsystem", 00:09:09.986 "req_id": 1 00:09:09.986 } 00:09:09.986 Got JSON-RPC error response 00:09:09.986 response: 00:09:09.986 { 00:09:09.986 "code": -32602, 00:09:09.986 "message": "Invalid MN SPDK_Controller\u001f" 00:09:09.986 }' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:09.986 { 00:09:09.986 "nqn": "nqn.2016-06.io.spdk:cnode11577", 00:09:09.986 "model_number": "SPDK_Controller\u001f", 00:09:09.986 "method": "nvmf_create_subsystem", 00:09:09.986 "req_id": 1 00:09:09.986 } 00:09:09.986 Got JSON-RPC error response 00:09:09.986 response: 00:09:09.986 { 00:09:09.986 "code": -32602, 00:09:09.986 "message": "Invalid MN SPDK_Controller\u001f" 00:09:09.986 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid 
-- target/invalid.sh@19 -- # local length=21 ll 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x62' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:09:09.986 23:34:58 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=Z 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 40 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Iqbl|d=#4)yZk(yW(MI B' 00:09:09.987 23:34:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Iqbl|d=#4)yZk(yW(MI B' nqn.2016-06.io.spdk:cnode21059 00:09:10.246 [2024-07-15 23:34:59.049953] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21059: invalid serial number 'Iqbl|d=#4)yZk(yW(MI B' 00:09:10.246 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:10.246 { 00:09:10.246 "nqn": "nqn.2016-06.io.spdk:cnode21059", 00:09:10.246 "serial_number": "Iqbl|d=#4)yZk(yW(MI B", 00:09:10.246 "method": "nvmf_create_subsystem", 00:09:10.246 "req_id": 1 00:09:10.246 } 00:09:10.246 Got JSON-RPC error response 00:09:10.246 response: 00:09:10.246 { 00:09:10.246 "code": -32602, 00:09:10.246 "message": "Invalid SN Iqbl|d=#4)yZk(yW(MI B" 00:09:10.246 }' 00:09:10.246 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:10.246 { 00:09:10.246 "nqn": "nqn.2016-06.io.spdk:cnode21059", 00:09:10.246 "serial_number": "Iqbl|d=#4)yZk(yW(MI B", 00:09:10.246 "method": "nvmf_create_subsystem", 00:09:10.246 "req_id": 1 00:09:10.246 } 00:09:10.246 Got JSON-RPC error response 00:09:10.246 response: 00:09:10.246 { 00:09:10.246 "code": -32602, 00:09:10.246 "message": "Invalid SN Iqbl|d=#4)yZk(yW(MI B" 00:09:10.246 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:10.246 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:10.246 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:10.246 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' 
'38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:10.246 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:10.246 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:10.246 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:10.246 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x52' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=/ 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:10.247 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 86 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.248 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:10.506 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x72' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=']' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ U == \- ]] 00:09:10.507 23:34:59 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'UCe]9qQRxE,P"bX /4b7 /dev/null' 00:09:12.580 23:35:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.132 23:35:03 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:15.132 00:09:15.132 real 0m11.724s 00:09:15.132 user 0m19.531s 00:09:15.132 sys 0m5.035s 00:09:15.132 23:35:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:15.132 23:35:03 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:15.132 ************************************ 00:09:15.132 END TEST nvmf_invalid 00:09:15.132 ************************************ 00:09:15.132 23:35:03 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:15.132 23:35:03 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:15.132 23:35:03 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:15.132 23:35:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:15.132 23:35:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:15.132 ************************************ 
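The loop traced above shows how target/invalid.sh assembles its random test string one character at a time: `printf %x` renders a code point as hex, `echo -e '\xNN'` converts it back to the character, and `string+=` appends it until `ll` reaches `length`. A minimal standalone sketch of that same pattern follows; the `gen_string` wrapper is hypothetical (the real script inlines the loop), and the code-point range here is restricted to printable non-space ASCII as an assumption for simplicity:

```shell
#!/usr/bin/env bash
# Sketch of the per-character string builder seen in the trace.
# gen_string is a hypothetical helper; target/invalid.sh inlines this loop.
gen_string() {
    local length=$1 string='' ll code hex
    for (( ll = 0; ll < length; ll++ )); do
        # Pick a printable, non-space ASCII code point (0x21-0x7e).
        code=$(( RANDOM % 94 + 33 ))
        hex=$(printf %x "$code")         # e.g. 93 -> 5d
        string+=$(echo -e "\x$hex")      # e.g. \x5d -> ']'
    done
    echo "$string"
}

s=$(gen_string 20)
echo "${#s}"   # -> 20
```

Each iteration appends exactly one character, so the generated string always has the requested length, mirroring the `(( ll < length ))` guard in the trace.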
00:09:15.132 START TEST nvmf_abort 00:09:15.132 ************************************ 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:15.132 * Looking for test storage... 00:09:15.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- 
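The paths/export.sh trace above shows PATH growing a fresh copy of `/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin` on every sourcing, so the same directories appear many times. The traced script does not de-duplicate; as an illustration only, a small sketch of collapsing such repeats while preserving order (the `dedup_path` helper is hypothetical, not part of SPDK):

```shell
#!/usr/bin/env bash
# Collapse duplicate PATH entries, keeping the first occurrence of each.
# dedup_path is a hypothetical helper; paths/export.sh does not do this.
dedup_path() {
    local out='' dir
    local IFS=':'
    for dir in $1; do                 # split the input on ':'
        case ":$out:" in
            *":$dir:"*) ;;            # already seen, skip
            *) out=${out:+$out:}$dir ;;
        esac
    done
    echo "$out"
}

dedup_path "/a/bin:/b/bin:/a/bin:/c/bin"   # -> /a/bin:/b/bin:/c/bin
```

Applied to the exported PATH above, this would leave one copy of each toolchain directory in its original position.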
nvmf/common.sh@51 -- # have_pci_nics=0 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:15.132 23:35:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.435 23:35:08 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.435 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:20.436 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:20.436 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- 
# (( 0 > 0 )) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:20.436 Found net devices under 0000:86:00.0: cvl_0_0 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:20.436 Found net devices under 
0000:86:00.1: cvl_0_1 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:09:20.436 00:09:20.436 --- 10.0.0.2 ping statistics --- 00:09:20.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.436 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:09:20.436 00:09:20.436 --- 10.0.0.1 ping statistics --- 00:09:20.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.436 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=891656 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 891656 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@823 -- # '[' -z 891656 ']' 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:20.436 23:35:08 
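The nvmf/common.sh steps traced above build the test topology: create the `cvl_0_0_ns_spdk` namespace, move the target-side port `cvl_0_0` into it, assign 10.0.0.2/24 there and 10.0.0.1/24 on the host-side `cvl_0_1`, then ping in both directions before starting the target. A hedged provisioning sketch of the same shape using a veth pair instead of the physical ice ports (the names `test_ns`, `veth0`, and `veth1` are stand-ins, and the commands require root):

```shell
#!/usr/bin/env bash
# Sketch of the namespace topology from the trace, with a veth pair
# standing in for the cvl_0_* ports. Illustrative names; must run as root.
ip netns add test_ns                          # like cvl_0_0_ns_spdk
ip link add veth0 type veth peer name veth1
ip link set veth1 netns test_ns               # target side goes into the ns
ip addr add 10.0.0.1/24 dev veth0             # initiator IP (host side)
ip netns exec test_ns ip addr add 10.0.0.2/24 dev veth1
ip link set veth0 up
ip netns exec test_ns ip link set veth1 up
ip netns exec test_ns ip link set lo up       # as the trace does
ping -c 1 10.0.0.2                            # host -> namespace
ip netns exec test_ns ping -c 1 10.0.0.1      # namespace -> host
```

With both pings answering, the target app can then be launched inside the namespace (the trace's `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`) while the initiator connects from the host side.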
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:20.436 23:35:08 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.436 [2024-07-15 23:35:08.778969] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:09:20.436 [2024-07-15 23:35:08.779009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.436 [2024-07-15 23:35:08.834844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.436 [2024-07-15 23:35:08.914407] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.436 [2024-07-15 23:35:08.914441] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.436 [2024-07-15 23:35:08.914448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.436 [2024-07-15 23:35:08.914455] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.436 [2024-07-15 23:35:08.914460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:20.436 [2024-07-15 23:35:08.914499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.436 [2024-07-15 23:35:08.914580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.436 [2024-07-15 23:35:08.914582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # return 0 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.694 [2024-07-15 23:35:09.626942] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:20.694 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.952 Malloc0 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 
1000000 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.952 Delay0 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:20.952 [2024-07-15 23:35:09.704605] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:20.952 23:35:09 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:20.952 [2024-07-15 23:35:09.858380] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:23.489 Initializing NVMe Controllers 00:09:23.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:23.489 controller IO queue size 128 less than required 00:09:23.489 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:23.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:23.489 Initialization complete. Launching workers. 
00:09:23.489 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 42220 00:09:23.489 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42285, failed to submit 62 00:09:23.489 success 42224, unsuccess 61, failed 0 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:23.489 23:35:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:23.489 rmmod nvme_tcp 00:09:23.489 rmmod nvme_fabrics 00:09:23.489 rmmod nvme_keyring 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 891656 ']' 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 891656 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@942 -- # '[' -z 891656 ']' 00:09:23.489 23:35:12 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # kill -0 891656 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # uname 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 891656 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@960 -- # echo 'killing process with pid 891656' 00:09:23.489 killing process with pid 891656 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@961 -- # kill 891656 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # wait 891656 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.489 23:35:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.409 23:35:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:25.409 00:09:25.409 real 0m10.746s 00:09:25.409 user 0m13.025s 00:09:25.409 sys 0m4.834s 00:09:25.409 23:35:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 
00:09:25.409 23:35:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:25.409 ************************************ 00:09:25.409 END TEST nvmf_abort 00:09:25.409 ************************************ 00:09:25.409 23:35:14 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:09:25.409 23:35:14 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:25.409 23:35:14 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:25.409 23:35:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:25.409 23:35:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:25.667 ************************************ 00:09:25.667 START TEST nvmf_ns_hotplug_stress 00:09:25.667 ************************************ 00:09:25.667 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:25.667 * Looking for test storage... 
00:09:25.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.667 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.667 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:25.667 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.667 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.667 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.667 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.667 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.667 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.668 23:35:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:25.668 23:35:14 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:30.944 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.944 
23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:30.944 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.944 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:30.945 
Found net devices under 0000:86:00.0: cvl_0_0 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:30.945 Found net devices under 0000:86:00.1: cvl_0_1 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.945 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.203 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.203 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.203 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.203 23:35:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.203 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.204 23:35:20 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:09:31.204 00:09:31.204 --- 10.0.0.2 ping statistics --- 00:09:31.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.204 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:31.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:09:31.204 00:09:31.204 --- 10.0.0.1 ping statistics --- 00:09:31.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.204 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- 
# timing_enter start_nvmf_tgt 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=895691 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 895691 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@823 -- # '[' -z 895691 ']' 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:31.204 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:31.204 [2024-07-15 23:35:20.132741] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:09:31.204 [2024-07-15 23:35:20.132782] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.463 [2024-07-15 23:35:20.191778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.463 [2024-07-15 23:35:20.270513] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.463 [2024-07-15 23:35:20.270548] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:31.463 [2024-07-15 23:35:20.270555] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.463 [2024-07-15 23:35:20.270561] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.463 [2024-07-15 23:35:20.270566] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
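The remainder of this log exercises a namespace hot-plug cycle: while spdk_nvme_perf runs, the script repeatedly removes and re-adds namespace 1 and grows the NULL1 bdev by one block per pass. A minimal dry-run sketch of that loop (not the actual target/ns_hotplug_stress.sh; `rpc` is a hypothetical stand-in for scripts/rpc.py that only echoes the call):

```shell
# Hedged sketch of the hotplug-stress loop seen in this log. Assumes a running
# nvmf_tgt with subsystem nqn.2016-06.io.spdk:cnode1, bdevs Delay0 and NULL1.
# "rpc" is a hypothetical stand-in for scripts/rpc.py; here it only echoes.
rpc() { echo "rpc.py $*"; }

null_size=1000
for _ in 1 2 3; do
  # Hot-remove namespace 1 while I/O is in flight, then re-add it.
  rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Grow the null bdev by one block each pass (1001, 1002, ...).
  null_size=$((null_size + 1))
  rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"
```

In the real test the loop runs for the duration of the perf job, with `kill -0 $PERF_PID` checks (visible above as `kill -0 896155`) to stop once perf exits.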
00:09:31.463 [2024-07-15 23:35:20.270604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.463 [2024-07-15 23:35:20.270692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.463 [2024-07-15 23:35:20.270693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.032 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:32.032 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # return 0 00:09:32.032 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:32.032 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:32.032 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:32.032 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.032 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:32.032 23:35:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:32.291 [2024-07-15 23:35:21.131786] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.291 23:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:32.550 23:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.550 [2024-07-15 23:35:21.489075] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 10.0.0.2 port 4420 *** 00:09:32.550 23:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:32.809 23:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:33.067 Malloc0 00:09:33.068 23:35:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:33.324 Delay0 00:09:33.324 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.324 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:33.581 NULL1 00:09:33.581 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:33.838 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:33.838 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=896155 00:09:33.838 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:33.838 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.838 Read completed with error (sct=0, sc=11) 00:09:33.838 23:35:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.097 23:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:34.097 23:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:34.358 true 00:09:34.358 23:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:34.358 23:35:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.292 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.292 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:35.292 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:35.550 true 00:09:35.550 23:35:24
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:35.550 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.809 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.809 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:35.809 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:36.067 true 00:09:36.067 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:36.067 23:35:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.460 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.460 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.460 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:37.460 23:35:26
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:37.460 true 00:09:37.718 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:37.718 23:35:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.284 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.543 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:38.543 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:38.802 true 00:09:38.802 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:38.802 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.059 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.059 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:39.059 23:35:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:39.319 true 00:09:39.319 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:39.319 
23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.627 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.627 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:39.627 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:39.886 true 00:09:39.886 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:39.886 23:35:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.820 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.820 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:40.820 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:41.079 true 00:09:41.079 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:41.079 23:35:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.338 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.596 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:41.596 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:41.596 true 00:09:41.596 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:41.596 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:41.856 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.116 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:42.116 23:35:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:42.116 true 00:09:42.116 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:42.116 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.375 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.634 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:42.634 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:42.634 true 00:09:42.634 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:42.634 23:35:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.012 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.012 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.012 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:44.012 23:35:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:44.271 true
00:09:44.271 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:44.271 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.205 23:35:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.205 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:45.205 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:45.464 true 00:09:45.464 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:45.464 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.723 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.982 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:45.982 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:45.982 true 00:09:45.982 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:45.982 23:35:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:46.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.241 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:46.528 [2024-07-15 23:35:35.231690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.528 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:46.529 [2024-07-15 23:35:35.236987] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.237997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238790] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.238981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.529 [2024-07-15 23:35:35.239849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.239887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.239924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 
23:35:35.239962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.239998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.240969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 
[2024-07-15 23:35:35.241642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.241985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242185] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.242963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243449] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.243966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.244985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.245034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.530 [2024-07-15 23:35:35.245076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 
23:35:35.245121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.245980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 
[2024-07-15 23:35:35.246436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.246719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.247205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.247244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.247287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.247325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.247364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.247407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.247455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.247497] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.531 [2024-07-15 23:35:35.247537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd SGL-length error entries ([2024-07-15 23:35:35.247575] through [2024-07-15 23:35:35.262300]) repeated; duplicates omitted ...]
00:09:46.534 
[2024-07-15 23:35:35.262350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262832] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262924] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.262970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:46.534 [2024-07-15 23:35:35.263012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.263054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.263089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.263128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.263169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.263214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:46.534 [2024-07-15 23:35:35.264039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.264917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 
23:35:35.264962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.534 [2024-07-15 23:35:35.265662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.265703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.265736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.265772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.265809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.265857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.265896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.265934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.265978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 
[2024-07-15 23:35:35.266181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266747] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.266976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.267996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268231] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.268968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 
23:35:35.269459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.269699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.270570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.270617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.270654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.270692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.270732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.270772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.270823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.270865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.270907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.270960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.271003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.271050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.271096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.535 [2024-07-15 23:35:35.271143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 
[2024-07-15 23:35:35.271560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.271973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.272019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.272062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.272109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.272150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.272182] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.536 [2024-07-15 23:35:35.272220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical error line repeated through 23:35:35.287558; duplicates elided]
00:09:46.539 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
length 1 00:09:46.539 [2024-07-15 23:35:35.287605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.287649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.287694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.287750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.287792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.287840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.287883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.287933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.287979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288262] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.539 [2024-07-15 23:35:35.288966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289561] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.289988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.290033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.290071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.290119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.290161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.290206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.290248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.290293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 
23:35:35.291689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.291962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.292010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.292064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.292107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.539 [2024-07-15 23:35:35.292158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.292944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 
[2024-07-15 23:35:35.292982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293566] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.293945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.294881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295484] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.295960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 
23:35:35.296761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.296985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.297021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.297056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.297100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.297144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.297186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.297233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.540 [2024-07-15 23:35:35.297275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.541 [2024-07-15 23:35:35.297313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.541 [2024-07-15 23:35:35.297356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
00:09:46.541 [2024-07-15 23:35:35.297399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:46.541 [... identical *ERROR* line from ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd repeated for timestamps 23:35:35.297441 through 23:35:35.313095; repetitions elided ...]
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 
[2024-07-15 23:35:35.313724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.313971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314342] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.314952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.315980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316269] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.316978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 
23:35:35.317484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.544 [2024-07-15 23:35:35.317945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.317993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 
[2024-07-15 23:35:35.318924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.318967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319524] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.319977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320700] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.320969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.321516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.322052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.322103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.322156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.322200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.322250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.322304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.322345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.322387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.545 [2024-07-15 23:35:35.322433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.546 [2024-07-15 23:35:35.322476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.546 [2024-07-15 
23:35:35.322520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:46.548 [2024-07-15
23:35:35.337492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.548 [2024-07-15 23:35:35.337542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.548 [2024-07-15 23:35:35.337589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.548 [2024-07-15 23:35:35.337632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.548 [2024-07-15 23:35:35.337677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.548 [2024-07-15 23:35:35.337723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.548 [2024-07-15 23:35:35.337779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.548 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:46.548 [2024-07-15 23:35:35.338286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:09:46.549 [2024-07-15 23:35:35.338598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.338956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339201] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.339976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340516] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.340994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.341905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 
23:35:35.341942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.342266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.342310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.342358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.342395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.342432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.549 [2024-07-15 23:35:35.342473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.342976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 
[2024-07-15 23:35:35.343464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.343986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.344035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.344079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.344124] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.344173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.344222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.344270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.344313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.344360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.345983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346067] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.346973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.347020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.347070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.347114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.347161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.347208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.347265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.347312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 23:35:35.347354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.550 [2024-07-15 
23:35:35.347400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
23:35:35.363172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.363973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.364014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.364062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.364104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.364147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.364188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.364220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.364265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 
[2024-07-15 23:35:35.365238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365909] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.365998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.554 [2024-07-15 23:35:35.366976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367196] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.367976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 
23:35:35.368636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.368913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.369980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 
[2024-07-15 23:35:35.370372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.370962] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.555 [2024-07-15 23:35:35.371810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.371857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.371906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.371953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372427] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.372999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.373035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.373077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.556 [2024-07-15 23:35:35.373116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.388828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.388882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.388927] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.388974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:46.559 [2024-07-15 23:35:35.389257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.389586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 
[2024-07-15 23:35:35.390598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.390970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.391013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.391055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.391097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.391144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.391189] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.391234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.391275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.559 [2024-07-15 23:35:35.391317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.391973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392555] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.392939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.393950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 
23:35:35.393991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.394988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 
[2024-07-15 23:35:35.395246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.395817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.396644] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.396699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.396744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.396789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.396835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.396886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.560 [2024-07-15 23:35:35.396930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.396982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.397957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398052] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.561 [2024-07-15 23:35:35.398772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.561 [last message repeated with successive timestamps through 2024-07-15 23:35:35.414148]
block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 
23:35:35.414880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.414970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.415752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 
[2024-07-15 23:35:35.416585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.564 [2024-07-15 23:35:35.416947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.416990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417158] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.417975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418549] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.418987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.419022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.419868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.419914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.419954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.419994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 
23:35:35.420638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.420959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.565 [2024-07-15 23:35:35.421666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.421718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.421765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.421811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.421851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.421898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.421933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 
[2024-07-15 23:35:35.421972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422586] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.422957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.423998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.424044] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.566 [2024-07-15 23:35:35.424090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical message repeated many times, timestamps 2024-07-15 23:35:35.424135 through 23:35:35.438917 ...]
> SGL length 1 00:09:46.569 [2024-07-15 23:35:35.438967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439627] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.439836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.440992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 true 00:09:46.569 [2024-07-15 23:35:35.441379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 
[2024-07-15 23:35:35.441424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.441988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442078] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.442977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:46.569 [2024-07-15 23:35:35.443752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.443988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.444032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.444074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.444110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.444148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.569 [2024-07-15 23:35:35.444188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 
23:35:35.444438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.444965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 
[2024-07-15 23:35:35.445773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.445994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446380] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.446502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.447998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.448045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.448092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.448140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.448185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.448232] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.448281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.570 [2024-07-15 23:35:35.448328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.448977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 23:35:35.449555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 [2024-07-15 
23:35:35.449596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.571 
[... identical "Read NLB 1 * block size 512 > SGL length 1" error from ctrlr_bdev.c:309 repeated, timestamps 23:35:35.449646 through 23:35:35.465431; duplicate log lines elided ...]
00:09:46.575 [2024-07-15 
23:35:35.465475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.465517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.465562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.465611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.465657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.465707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.465750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:46.575 [2024-07-15 23:35:35.465793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.465838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.465878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.465922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.465963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.575 [2024-07-15 23:35:35.466148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.466954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.467423] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.575 [2024-07-15 23:35:35.468502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.468979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 
23:35:35.469312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.469964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 
[2024-07-15 23:35:35.470647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.470902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471427] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.471981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.472029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.472075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.576 [2024-07-15 23:35:35.472122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.472177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.472220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.472270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.576 [2024-07-15 23:35:35.472322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472794] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.472976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.577 [2024-07-15 23:35:35.473451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.473915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.474718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.474766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 
23:35:35.474798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.474838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.474882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.474920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.474948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.474988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.475035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.475073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.475103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.475132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.475162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.475190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.475229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.475264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.475309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 [2024-07-15 23:35:35.475352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.855 (last message repeated for each command, timestamps 23:35:35.475394 through 23:35:35.490678) [2024-07-15 23:35:35.490725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.490768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.490813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.490870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.490916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.490963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 
[2024-07-15 23:35:35.491437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.491978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492181] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.492982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.493025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.493061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.493100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.493139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.493180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.493219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.858 [2024-07-15 23:35:35.493265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493380] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.493965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.494012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.494062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.494117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.494166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.494209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.494256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.494301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.494347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:46.859 [2024-07-15 23:35:35.494857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.494908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.494960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 
[2024-07-15 23:35:35.495826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.495987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496406] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.496992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.859 [2024-07-15 23:35:35.497466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.497514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.497561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.497617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.497799] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.498975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 
23:35:35.499398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.499986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860 [2024-07-15 23:35:35.500025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.860
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.515814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.515860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.515892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.515933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.515975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 
[2024-07-15 23:35:35.516416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.516967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517012] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.517981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518773] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.863 [2024-07-15 23:35:35.518856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.518902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.518945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.518984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.519959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 
23:35:35.520001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.520613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.521961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 
[2024-07-15 23:35:35.522127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522751] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.522970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.523953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.524008] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.524057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.524106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.524154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.524203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.864 [2024-07-15 23:35:35.524248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.524951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 23:35:35.525459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [2024-07-15 
23:35:35.525502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.865 [log truncated: the same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error ("Read NLB 1 * block size 512 > SGL length 1") repeats continuously from 2024-07-15 23:35:35.525546 through 23:35:35.540783; duplicate entries omitted] 00:09:46.868 [2024-07-15
23:35:35.540831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.540885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.540929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.540972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.541972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 
[2024-07-15 23:35:35.542637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.542974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543218] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.543979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.544511] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:46.868 [2024-07-15 23:35:35.545337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.545385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.545430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.545465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.545505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.545549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.545590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.545630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.868 [2024-07-15 23:35:35.545676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.545719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.545765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.545806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.545853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 
23:35:35.545898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.545940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.545976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.546975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 
[2024-07-15 23:35:35.547115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547785] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.547980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.548959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549268] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.549992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.550050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.550100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.550141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.550180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.550222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.550267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.550306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.550349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.869 [2024-07-15 23:35:35.550387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.870 [2024-07-15 23:35:35.550430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.870 [2024-07-15 23:35:35.550471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.870 [2024-07-15 
23:35:35.550517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... same ctrlr_bdev.c:309 nvmf_bdev_ctrlr_read_cmd error line repeated for each rejected read command; timestamps 23:35:35.550556 through 23:35:35.566588 elided ...]
23:35:35.566632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.566674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.566718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.566761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.566801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.566841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.566885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.566924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.566977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.567920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 
[2024-07-15 23:35:35.567964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568557] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.568995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.569977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570164] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.570982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.571026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.571067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.571108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.571150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.571187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.571235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.571278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.571324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.571365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 23:35:35.571404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.873 [2024-07-15 
23:35:35.571448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.571486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.571536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.571581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.571630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.571675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.571722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.571771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.572590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.572641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.572692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.572739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.572786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.572839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.572886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.572933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.572979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 
[2024-07-15 23:35:35.573594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.573998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574167] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.574999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575523] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.575996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.576036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.576085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.576121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.874 [2024-07-15 23:35:35.576166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.591722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.591769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.591818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 
23:35:35.592574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.877 [2024-07-15 23:35:35.592810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.592855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.592908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.592955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.593903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 
[2024-07-15 23:35:35.593947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594616] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.594916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.595695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.595747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.595788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.595836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.595872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.595916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.595957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.878 [2024-07-15 23:35:35.595999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596607] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.596977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 
23:35:35.597796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.597982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.598033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.598087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.598134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.598180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.878 [2024-07-15 23:35:35.598229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:46.879 [2024-07-15 23:35:35.598637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.598967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599263] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.599985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600602] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.600969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601068] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.879 [2024-07-15 23:35:35.601759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[last message repeated verbatim for timestamps 23:35:35.601802 through 23:35:35.617228]
00:09:46.882 [2024-07-15 23:35:35.617272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 
23:35:35.617946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.617988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.618914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.619744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.619795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.882 [2024-07-15 23:35:35.619839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.619883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.619938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 
[2024-07-15 23:35:35.619982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620127] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620635] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.620980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621870] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.621970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.622991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 
23:35:35.623416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.623988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.624962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.625000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 
[2024-07-15 23:35:35.625040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.625083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.883 [2024-07-15 23:35:35.625123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625668] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.625987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.626976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.627011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.627055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.627101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:46.884 [2024-07-15 23:35:35.627142] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:46.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:46.885 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:47.170 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:47.170 [2024-07-15 23:35:35.840666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
23:35:35.848542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.848586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.848633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.848676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.848719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.848762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.848807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.848850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.848893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.848937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.848979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 
[2024-07-15 23:35:35.849763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.849880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850819] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.850992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.851988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852120] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.852981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.853025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.853078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.853113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.853153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.853986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.854033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.854074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.854108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 
23:35:35.854150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.854187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.172 [2024-07-15 23:35:35.854229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.854990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 
[2024-07-15 23:35:35.855448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.855997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856069] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.856984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857137] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857385] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.857981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.858025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.858068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.858115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.858159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.858205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.858262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.858304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.858352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.858393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.173 [2024-07-15 23:35:35.858437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:47.176 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:09:47.176 23:35:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.873759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.873800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.873844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.873889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.873933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.873980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.874022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.874067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.874101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.874144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.874183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.874227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.874268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 [2024-07-15 23:35:35.874310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.176 
[2024-07-15 23:35:35.874354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.874395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.874437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.874476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.874518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.874564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.874606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.874649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.874689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 Message suppressed 999 times: [2024-07-15 23:35:35.875265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 Read completed with error (sct=0, sc=15) 00:09:47.177 [2024-07-15 23:35:35.875315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:47.177 [2024-07-15 23:35:35.875461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.875963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876101] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.876969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877355] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.877986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.878977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 
23:35:35.879118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879833] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.879966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.177 [2024-07-15 23:35:35.880008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 
[2024-07-15 23:35:35.880423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.880955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881000] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.881984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882791] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882834] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.882970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.178 [2024-07-15 23:35:35.883422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 
23:35:35.899860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.899989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.181 [2024-07-15 23:35:35.900813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.900858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.900897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.900936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.900978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.901023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 
[2024-07-15 23:35:35.901064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.901580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.901632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.901683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.901728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.901771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.901817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.901865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.901912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.901955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902195] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.902969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903533] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903831] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.903961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.904991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 
23:35:35.905278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.905996] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.182 [2024-07-15 23:35:35.906588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 
[2024-07-15 23:35:35.906633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.906677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.906717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.906759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.906798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.906844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.906885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.906937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.906979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907215] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.907773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.908980] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.183 [2024-07-15 23:35:35.909035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
> SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924927] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.924965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.925979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 
23:35:35.926400] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.926957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927044] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927445] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 
[2024-07-15 23:35:35.927679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.186 [2024-07-15 23:35:35.927809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.927856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.927902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.927950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.927990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928257] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.928526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:47.187 [2024-07-15 23:35:35.929331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 
23:35:35.929596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.187 [2024-07-15 23:35:35.929874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.929919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.929962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 
[2024-07-15 23:35:35.930851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.930977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931444] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.931983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932886] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.932989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.933966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 23:35:35.934559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.188 [2024-07-15 
23:35:35.934601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [last message repeated for every entry through 2024-07-15 23:35:35.949915]
23:35:35.949958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.950976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.191 [2024-07-15 23:35:35.951015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 
[2024-07-15 23:35:35.951214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951805] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.951973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.952982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953634] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.953980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 
23:35:35.954883] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.954962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.192 [2024-07-15 23:35:35.955419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.955461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.955506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.955564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 
[2024-07-15 23:35:35.956664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.956959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957317] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.957986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958526] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.958811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.959332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.959384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.959430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.959481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.959524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.959570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.959619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.193 [2024-07-15 23:35:35.959679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" entries repeated for every timestamp from 2024-07-15 23:35:35.959724 through 23:35:35.974926 ...]
00:09:47.196 [2024-07-15 23:35:35.974971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 *
block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.975989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 
23:35:35.976087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.196 [2024-07-15 23:35:35.976922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.976980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 
[2024-07-15 23:35:35.977446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.977972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978049] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.197 [2024-07-15 23:35:35.978683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:47.197 [2024-07-15 23:35:35.979212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 
* block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.979997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 
23:35:35.980524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.980988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981195] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 
[2024-07-15 23:35:35.981802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.981966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.982010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.982051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.197 [2024-07-15 23:35:35.982093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.982300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.982677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.982727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.982777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.982823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.982870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.982918] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.982964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.983985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984315] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.198 [2024-07-15 23:35:35.984938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:09:47.198 [... identical *ERROR* entries from ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd repeated for timestamps 23:35:35.984988 through 23:35:36.001055, elided ...]
00:09:47.201 [2024-07-15 23:35:36.001100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 
23:35:36.001742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.001995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.002038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.002079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.002124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.002157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.002200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.002241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.002283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.002325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.002366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 
[2024-07-15 23:35:36.003841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.003992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004516] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.004974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.005019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.005066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.005098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.201 [2024-07-15 23:35:36.005141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.005184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.005228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.005270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.005311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.005351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.201 [2024-07-15 23:35:36.005402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005721] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005937] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.005980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.006951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007478] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 
23:35:36.007672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.007982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 
[2024-07-15 23:35:36.008936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.008977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009530] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.009994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.010961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.011009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.011066] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.202 [2024-07-15 23:35:36.011112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[identical *ERROR* line from ctrlr_bdev.c:309 repeated through 23:35:36.025708; duplicates omitted]
> SGL length 1 00:09:47.205 [2024-07-15 23:35:36.025748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.025789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.025830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.025873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.025915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.025957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026370] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.026986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.027974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 
23:35:36.028184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.205 [2024-07-15 23:35:36.028911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.028953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.028989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 
[2024-07-15 23:35:36.029416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.029779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:09:47.206 [2024-07-15 23:35:36.030474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:09:47.206 [2024-07-15 23:35:36.030521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.030982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031159] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031244] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.031972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032393] 
ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.032982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * 
block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.033536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.033585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.033635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.033698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.033747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.033799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.033848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.033894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.033947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.033992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 
23:35:36.034231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.034989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.035030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.035071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.035115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.035161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.035207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.035262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.206 [2024-07-15 23:35:36.035308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 
[2024-07-15 23:35:36.035526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.035959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.036003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.036045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.036091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.036132] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 [2024-07-15 23:35:36.036170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:09:47.207 true 00:09:47.207 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:47.207 23:35:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.145 23:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.406 23:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:48.406 23:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:48.724 true 00:09:48.724 23:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:48.725 23:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.725 23:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.984 23:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:48.984 23:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:49.244 true 00:09:49.244 23:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:49.244 23:35:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.244 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.502 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:49.502 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:49.760 true 00:09:49.760 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:49.760 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.760 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.019 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:50.019 23:35:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:50.279 true 00:09:50.279 23:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:50.279 23:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.279 23:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.538 23:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:50.538 23:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:50.797 true 00:09:50.797 23:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:50.797 23:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.056 23:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.056 23:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:51.056 23:35:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:51.315 true 00:09:51.315 23:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:51.315 23:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.574 23:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.832 23:35:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:51.832 23:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:51.832 true 00:09:51.832 23:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:51.832 23:35:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.769 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:52.769 23:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.028 23:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:53.028 23:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:53.028 true 00:09:53.028 23:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:53.028 23:35:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.966 23:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.966 23:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:53.966 23:35:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:54.225 true 00:09:54.225 23:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:54.225 23:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.493 23:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.754 23:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:54.754 23:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:54.754 true 00:09:54.754 23:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:54.754 23:35:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.130 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:56.130 23:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.130 23:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:56.130 23:35:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:56.390 true 00:09:56.390 23:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 
00:09:56.390 23:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.649 23:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.650 23:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:56.650 23:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:56.909 true 00:09:56.909 23:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:56.909 23:35:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.288 23:35:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:58.288 23:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:58.288 23:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:58.288 true 00:09:58.548 23:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:58.548 23:35:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.115 23:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.373 23:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:59.373 23:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:59.632 true 00:09:59.632 23:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:09:59.632 23:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.889 23:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.889 23:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:09:59.889 23:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:00.147 true 00:10:00.147 23:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:10:00.147 23:35:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.136 23:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.394 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:01.394 23:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:01.394 23:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:01.652 true 00:10:01.652 23:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:10:01.652 23:35:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.585 23:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.585 23:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:02.585 23:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:02.844 true 00:10:02.844 23:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 896155 00:10:02.844 23:35:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:04.737 Initializing NVMe Controllers
00:10:04.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:04.737 Controller IO queue size 128, less than required.
00:10:04.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:04.737 Controller IO queue size 128, less than required.
00:10:04.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:04.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:04.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:04.737 Initialization complete. Launching workers.
00:10:04.737 ========================================================
00:10:04.737                                                          Latency(us)
00:10:04.738 Device Information                                                     :     IOPS    MiB/s  Average      min        max
00:10:04.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2257.63    1.10 32313.00  1529.93 1220648.14
00:10:04.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13915.60    6.79  9198.89  1315.03  309333.88
00:10:04.738 ========================================================
00:10:04.738 Total                                                                  : 16173.23    7.90 12425.40  1315.03 1220648.14
00:10:04.738
00:10:04.738 23:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:04.738 23:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:10:04.738 23:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:10:04.738 true
00:10:04.738 23:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 896155
00:10:04.738 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (896155) - No such process
00:10:04.738 23:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 896155
00:10:04.738 23:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:04.996 23:35:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:05.255 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:05.255 23:35:54
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:10:05.255 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:10:05.255 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.255 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:10:05.255 null0 00:10:05.255 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.255 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.255 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:10:05.513 null1 00:10:05.513 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.513 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.513 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:05.770 null2 00:10:05.770 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.770 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:05.770 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:05.770 null3 00:10:05.770 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:05.770 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:10:05.770 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:06.028 null4 00:10:06.028 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.028 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.028 23:35:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:06.286 null5 00:10:06.286 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.286 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.286 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:06.547 null6 00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:06.547 null7 00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 901777 901778 901781 901783 901784 901786 901788 901790
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:06.547 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:06.868 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:06.868 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:06.868 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:06.868 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:06.868 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:06.868 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:06.868 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:06.868 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:07.127 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.128 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.128 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:07.128 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.128 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.128 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:07.128 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.128 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.128 23:35:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:07.128 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:07.128 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:07.128 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:07.128 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:07.128 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:07.128 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:07.128 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:07.128 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.387 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:07.646 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:07.646 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:07.646 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:07.646 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:07.646 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:07.647 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:07.647 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:07.647 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:07.906 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:08.165 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.165 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.165 23:35:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.165 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.425 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:08.683 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:08.941 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:09.200 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:09.200 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:09.200 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:09.200 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:09.200 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:09.200 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:09.200 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:09.200 23:35:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.200 23:35:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.200 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.458 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.717 23:35:58 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:09.717 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:09.976 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.234 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.235 23:35:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:10.235 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:10.235 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:10.235 23:35:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:10.235 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.235 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:10.235 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:10.235 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:10.235 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:10.493 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.493 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.493 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.493 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.493 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.493 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.493 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.493 
23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.494 rmmod nvme_tcp 00:10:10.494 rmmod nvme_fabrics 00:10:10.494 rmmod nvme_keyring 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@124 -- # set -e 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 895691 ']' 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 895691 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@942 -- # '[' -z 895691 ']' 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # kill -0 895691 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # uname 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 895691 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # echo 'killing process with pid 895691' 00:10:10.494 killing process with pid 895691 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@961 -- # kill 895691 00:10:10.494 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # wait 895691 00:10:10.753 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:10.753 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:10.753 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:10.753 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.753 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:10:10.753 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.753 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:10.753 23:35:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.292 23:36:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:13.292 00:10:13.292 real 0m47.287s 00:10:13.292 user 3m13.114s 00:10:13.292 sys 0m14.662s 00:10:13.292 23:36:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:13.292 23:36:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.292 ************************************ 00:10:13.292 END TEST nvmf_ns_hotplug_stress 00:10:13.292 ************************************ 00:10:13.292 23:36:01 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:10:13.292 23:36:01 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:13.292 23:36:01 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:13.292 23:36:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:13.292 23:36:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:13.292 ************************************ 00:10:13.292 START TEST nvmf_connect_stress 00:10:13.292 ************************************ 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:13.292 * Looking for test storage... 
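The `ns_hotplug_stress` trace above is the xtrace of a loop that repeatedly attaches namespaces `null0`..`null7` to `nqn.2016-06.io.spdk:cnode1` and then detaches them (lines @16-@18 of `ns_hotplug_stress.sh`, bounded by `i < 10`). A minimal dry-run sketch of that pattern, inferred from the trace — `RPC` is stubbed with `echo` here, whereas the real run invokes `spdk/scripts/rpc.py` against a live target, and the real script interleaves the namespace IDs in varying order:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the hotplug stress pattern seen in the trace.
RPC="echo rpc.py"                 # stub; real run: spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1    # subsystem NQN from the log

hotplug_pass() {
  local n
  # attach namespaces 1..8, backed by null bdevs null0..null7
  for n in {1..8}; do
    $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
  done
  # then detach them all again
  for n in {1..8}; do
    $RPC nvmf_subsystem_remove_ns "$NQN" "$n"
  done
}

# the log shows up to 10 passes ("(( i < 10 ))"); two suffice for a sketch
for (( i = 0; i < 2; i++ )); do
  hotplug_pass
done
```

Each pass emits 16 RPC invocations (8 adds, 8 removes), matching the rhythm of the `@17`/`@18` lines in the trace.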
00:10:13.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.292 23:36:01 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.292 23:36:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:18.564 23:36:06 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.564 
23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:18.564 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:18.564 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:18.565 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:18.565 
23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:18.565 Found net devices under 0000:86:00.0: cvl_0_0 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.565 23:36:06 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:18.565 Found net devices under 0000:86:00.1: cvl_0_1 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:18.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:18.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:10:18.565 00:10:18.565 --- 10.0.0.2 ping statistics --- 00:10:18.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.565 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:10:18.565 00:10:18.565 --- 10.0.0.1 ping statistics --- 00:10:18.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.565 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:18.565 23:36:06 
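(Editor's note: the `nvmftestinit` phase above moves one port of the NIC pair into a dedicated network namespace so that target and initiator traffic crosses a real link. A minimal sketch of that setup, with the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses taken from the log; this is a privileged environment-setup fragment and needs root plus the matching hardware:)

```shell
# Sketch of the test-network setup performed by nvmf/common.sh (needs root).
# One physical port (cvl_0_0) becomes the target side inside a namespace;
# its peer port (cvl_0_1) stays in the default namespace as the initiator.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target port into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP traffic on the default port 4420, then verify reachability
# in both directions, exactly as the log does.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The namespace boundary is what lets a single machine exercise the full TCP stack end to end: packets leave one port and arrive on the other instead of being short-circuited through loopback.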
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=906052 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 906052 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@823 -- # '[' -z 906052 ']' 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:18.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:18.565 23:36:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.565 [2024-07-15 23:36:06.940437] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:10:18.565 [2024-07-15 23:36:06.940480] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.565 [2024-07-15 23:36:06.997394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:18.565 [2024-07-15 23:36:07.076794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:18.565 [2024-07-15 23:36:07.076828] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.565 [2024-07-15 23:36:07.076835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.565 [2024-07-15 23:36:07.076841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.565 [2024-07-15 23:36:07.076846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.565 [2024-07-15 23:36:07.076882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.565 [2024-07-15 23:36:07.076968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.565 [2024-07-15 23:36:07.076969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.824 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:18.824 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # return 0 00:10:18.824 23:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:18.824 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:18.824 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:18.824 23:36:07 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.824 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.824 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:18.824 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.083 [2024-07-15 23:36:07.797873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.083 23:36:07 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:19.083 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:19.083 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:19.083 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.083 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:19.083 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.083 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:19.083 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.083 [2024-07-15 23:36:07.825330] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.083 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.084 NULL1 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=906297 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:19.084 23:36:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.343 23:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:19.343 23:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:19.343 23:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.343 23:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:19.343 23:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.655 23:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:19.655 23:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:19.655 23:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:19.655 23:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:19.655 23:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.222 23:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:20.222 23:36:08 
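(Editor's note: the target configuration traced above reduces to four JSON-RPC calls against the `nvmf_tgt` app. A hedged sketch using SPDK's stock `scripts/rpc.py` client, with the flags and NQN copied from the log; it assumes a target is already listening on the default `/var/tmp/spdk.sock`:)

```shell
# Sketch of the RPC sequence the test issues (rpc_cmd wraps scripts/rpc.py).
# Assumes nvmf_tgt is already running and listening on /var/tmp/spdk.sock.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB IO unit size
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                    # allow any host, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                        # listen inside the target namespace
$RPC bdev_null_create NULL1 1000 512                   # 1000 MiB null bdev, 512 B blocks
```

With the subsystem listening, the `connect_stress` binary then hammers `trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1` for the 10-second run seen in the log.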
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:20.222 23:36:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.222 23:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:20.222 23:36:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.479 23:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:20.479 23:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:20.479 23:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.479 23:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:20.479 23:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.737 23:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:20.737 23:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:20.737 23:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.737 23:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:20.737 23:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:20.996 23:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:20.996 23:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:20.996 23:36:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:20.996 23:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:20.996 23:36:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.254 23:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:21.254 23:36:10 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:21.254 23:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.254 23:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:21.254 23:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:21.823 23:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:21.823 23:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:21.823 23:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:21.823 23:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:21.823 23:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.082 23:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:22.082 23:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:22.082 23:36:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.082 23:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:22.082 23:36:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.341 23:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:22.341 23:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:22.341 23:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.341 23:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:22.341 23:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.601 23:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:22.601 23:36:11 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:22.601 23:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.601 23:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:22.601 23:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:22.860 23:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:22.860 23:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:22.860 23:36:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:22.860 23:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:22.860 23:36:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.445 23:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:23.445 23:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:23.445 23:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.445 23:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:23.445 23:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:23.773 23:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:23.773 23:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:23.773 23:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:23.773 23:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:23.773 23:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.033 23:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:24.033 23:36:12 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:24.033 23:36:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.033 23:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:24.033 23:36:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.292 23:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:24.292 23:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:24.293 23:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.293 23:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:24.293 23:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.551 23:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:24.551 23:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:24.551 23:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.551 23:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:24.551 23:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:24.809 23:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:24.809 23:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:24.809 23:36:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:24.809 23:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:24.809 23:36:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.377 23:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:25.377 23:36:14 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:25.377 23:36:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.377 23:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:25.377 23:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.636 23:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:25.636 23:36:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:25.636 23:36:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.636 23:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:25.636 23:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:25.895 23:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:25.895 23:36:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:25.895 23:36:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:25.895 23:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:25.895 23:36:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.155 23:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:26.155 23:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:26.155 23:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.155 23:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:26.155 23:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.721 23:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:26.721 23:36:15 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:26.721 23:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.721 23:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:26.721 23:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:26.979 23:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:26.979 23:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:26.979 23:36:15 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:26.979 23:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:26.979 23:36:15 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.237 23:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:27.237 23:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:27.237 23:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.237 23:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:27.237 23:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.496 23:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:27.496 23:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:27.496 23:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.496 23:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:27.496 23:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:27.754 23:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:27.754 23:36:16 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:27.754 23:36:16 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:27.754 23:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:27.754 23:36:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.322 23:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:28.322 23:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:28.322 23:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.322 23:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:28.322 23:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.581 23:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:28.581 23:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:28.581 23:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.581 23:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:28.581 23:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.839 23:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:28.839 23:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:28.839 23:36:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:28.839 23:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:28.839 23:36:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.098 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:29.098 23:36:18 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 906297 00:10:29.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (906297) - No such process 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 906297 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.098 rmmod nvme_tcp 00:10:29.098 rmmod nvme_fabrics 00:10:29.098 rmmod nvme_keyring 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 906052 ']' 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 906052 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@942 -- # '[' -z 906052 ']' 00:10:29.098 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # kill -0 906052 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # uname 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 906052 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@960 -- # echo 'killing process with pid 906052' 00:10:29.357 killing process with pid 906052 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@961 -- # kill 906052 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # wait 906052 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:29.357 23:36:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.889 23:36:20 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:31.889 00:10:31.889 real 0m18.628s 00:10:31.889 user 0m40.862s 00:10:31.889 sys 0m7.773s 00:10:31.889 23:36:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:31.889 23:36:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:31.889 ************************************ 00:10:31.889 END TEST nvmf_connect_stress 00:10:31.889 ************************************ 00:10:31.889 23:36:20 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:10:31.889 23:36:20 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:31.889 23:36:20 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:31.889 23:36:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:31.889 23:36:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.889 ************************************ 00:10:31.889 START TEST nvmf_fused_ordering 00:10:31.889 ************************************ 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:31.889 * Looking for test storage... 
00:10:31.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.889 23:36:20 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:31.889 23:36:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:37.160 23:36:25 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.160 
23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:37.160 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:37.160 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.160 
23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:37.160 Found net devices under 0000:86:00.0: cvl_0_0 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.160 23:36:25 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:37.160 Found net devices under 0000:86:00.1: cvl_0_1 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 
00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.160 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:37.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:10:37.161 00:10:37.161 --- 10.0.0.2 ping statistics --- 00:10:37.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.161 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:10:37.161 00:10:37.161 --- 10.0.0.1 ping statistics --- 00:10:37.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.161 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:37.161 23:36:25 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=911831 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 911831 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@823 -- # '[' -z 911831 ']' 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.161 23:36:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:37.161 [2024-07-15 23:36:25.659191] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:10:37.161 [2024-07-15 23:36:25.659244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.161 [2024-07-15 23:36:25.715993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.161 [2024-07-15 23:36:25.794170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:37.161 [2024-07-15 23:36:25.794202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.161 [2024-07-15 23:36:25.794208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.161 [2024-07-15 23:36:25.794215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.161 [2024-07-15 23:36:25.794220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.161 [2024-07-15 23:36:25.794241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # return 0 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.752 [2024-07-15 23:36:26.492038] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.752 [2024-07-15 23:36:26.508179] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.752 NULL1 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:37.752 23:36:26 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:37.752 [2024-07-15 23:36:26.561684] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:10:37.752 [2024-07-15 23:36:26.561723] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911866 ] 00:10:38.012 Attached to nqn.2016-06.io.spdk:cnode1 00:10:38.012 Namespace ID: 1 size: 1GB 00:10:38.012 fused_ordering(0) 00:10:38.012 fused_ordering(1) 00:10:38.012 fused_ordering(2) 00:10:38.012 fused_ordering(3) 00:10:38.012 fused_ordering(4) 00:10:38.012 fused_ordering(5) 00:10:38.012 fused_ordering(6) 00:10:38.012 fused_ordering(7) 00:10:38.012 fused_ordering(8) 00:10:38.012 fused_ordering(9) 00:10:38.012 fused_ordering(10) 00:10:38.012 fused_ordering(11) 00:10:38.012 fused_ordering(12) 00:10:38.012 fused_ordering(13) 00:10:38.012 fused_ordering(14) 00:10:38.012 fused_ordering(15) 00:10:38.012 fused_ordering(16) 00:10:38.012 fused_ordering(17) 00:10:38.012 fused_ordering(18) 00:10:38.012 fused_ordering(19) 00:10:38.012 fused_ordering(20) 00:10:38.012 fused_ordering(21) 00:10:38.012 fused_ordering(22) 00:10:38.012 fused_ordering(23) 00:10:38.012 fused_ordering(24) 00:10:38.012 fused_ordering(25) 00:10:38.012 fused_ordering(26) 00:10:38.012 fused_ordering(27) 00:10:38.012 
fused_ordering(28) 00:10:38.012 fused_ordering(29) 00:10:38.012 fused_ordering(30) 00:10:38.012 fused_ordering(31) 00:10:38.012 fused_ordering(32) 00:10:38.012 fused_ordering(33) 00:10:38.012 fused_ordering(34) 00:10:38.012 fused_ordering(35) 00:10:38.012 fused_ordering(36) 00:10:38.012 fused_ordering(37) 00:10:38.012 fused_ordering(38) 00:10:38.012 fused_ordering(39) 00:10:38.012 fused_ordering(40) 00:10:38.012 fused_ordering(41) 00:10:38.012 fused_ordering(42) 00:10:38.012 fused_ordering(43) 00:10:38.012 fused_ordering(44) 00:10:38.012 fused_ordering(45) 00:10:38.012 fused_ordering(46) 00:10:38.012 fused_ordering(47) 00:10:38.012 fused_ordering(48) 00:10:38.012 fused_ordering(49) 00:10:38.012 fused_ordering(50) 00:10:38.012 fused_ordering(51) 00:10:38.012 fused_ordering(52) 00:10:38.012 fused_ordering(53) 00:10:38.012 fused_ordering(54) 00:10:38.012 fused_ordering(55) 00:10:38.012 fused_ordering(56) 00:10:38.012 fused_ordering(57) 00:10:38.012 fused_ordering(58) 00:10:38.012 fused_ordering(59) 00:10:38.012 fused_ordering(60) 00:10:38.012 fused_ordering(61) 00:10:38.012 fused_ordering(62) 00:10:38.012 fused_ordering(63) 00:10:38.012 fused_ordering(64) 00:10:38.012 fused_ordering(65) 00:10:38.012 fused_ordering(66) 00:10:38.012 fused_ordering(67) 00:10:38.012 fused_ordering(68) 00:10:38.012 fused_ordering(69) 00:10:38.012 fused_ordering(70) 00:10:38.012 fused_ordering(71) 00:10:38.012 fused_ordering(72) 00:10:38.012 fused_ordering(73) 00:10:38.012 fused_ordering(74) 00:10:38.012 fused_ordering(75) 00:10:38.012 fused_ordering(76) 00:10:38.012 fused_ordering(77) 00:10:38.012 fused_ordering(78) 00:10:38.012 fused_ordering(79) 00:10:38.012 fused_ordering(80) 00:10:38.012 fused_ordering(81) 00:10:38.012 fused_ordering(82) 00:10:38.012 fused_ordering(83) 00:10:38.012 fused_ordering(84) 00:10:38.012 fused_ordering(85) 00:10:38.012 fused_ordering(86) 00:10:38.012 fused_ordering(87) 00:10:38.012 fused_ordering(88) 00:10:38.012 fused_ordering(89) 00:10:38.012 
fused_ordering(90) 00:10:38.012 fused_ordering(91) 00:10:38.012 fused_ordering(92) 00:10:38.012 fused_ordering(93) 00:10:38.012 fused_ordering(94) 00:10:38.012 fused_ordering(95) 00:10:38.012 fused_ordering(96) 00:10:38.012 fused_ordering(97) 00:10:38.012 fused_ordering(98) 00:10:38.012 fused_ordering(99) 00:10:38.012 fused_ordering(100) 00:10:38.012 fused_ordering(101) 00:10:38.012 fused_ordering(102) 00:10:38.012 fused_ordering(103) 00:10:38.012 fused_ordering(104) 00:10:38.012 fused_ordering(105) 00:10:38.012 fused_ordering(106) 00:10:38.012 fused_ordering(107) 00:10:38.012 fused_ordering(108) 00:10:38.012 fused_ordering(109) 00:10:38.012 fused_ordering(110) 00:10:38.012 fused_ordering(111) 00:10:38.012 fused_ordering(112) 00:10:38.012 fused_ordering(113) 00:10:38.012 fused_ordering(114) 00:10:38.012 fused_ordering(115) 00:10:38.012 fused_ordering(116) 00:10:38.012 fused_ordering(117) 00:10:38.012 fused_ordering(118) 00:10:38.012 fused_ordering(119) 00:10:38.012 fused_ordering(120) 00:10:38.012 fused_ordering(121) 00:10:38.012 fused_ordering(122) 00:10:38.012 fused_ordering(123) 00:10:38.012 fused_ordering(124) 00:10:38.012 fused_ordering(125) 00:10:38.012 fused_ordering(126) 00:10:38.012 fused_ordering(127) 00:10:38.012 fused_ordering(128) 00:10:38.012 fused_ordering(129) 00:10:38.012 fused_ordering(130) 00:10:38.012 fused_ordering(131) 00:10:38.012 fused_ordering(132) 00:10:38.012 fused_ordering(133) 00:10:38.012 fused_ordering(134) 00:10:38.012 fused_ordering(135) 00:10:38.012 fused_ordering(136) 00:10:38.012 fused_ordering(137) 00:10:38.012 fused_ordering(138) 00:10:38.012 fused_ordering(139) 00:10:38.012 fused_ordering(140) 00:10:38.012 fused_ordering(141) 00:10:38.012 fused_ordering(142) 00:10:38.012 fused_ordering(143) 00:10:38.012 fused_ordering(144) 00:10:38.012 fused_ordering(145) 00:10:38.012 fused_ordering(146) 00:10:38.012 fused_ordering(147) 00:10:38.012 fused_ordering(148) 00:10:38.012 fused_ordering(149) 00:10:38.012 fused_ordering(150) 
00:10:38.012 fused_ordering(151) 00:10:38.012 fused_ordering(152) 00:10:38.012 fused_ordering(153) 00:10:38.012 fused_ordering(154) 00:10:38.012 fused_ordering(155) 00:10:38.012 fused_ordering(156) 00:10:38.012 fused_ordering(157) 00:10:38.012 fused_ordering(158) 00:10:38.012 fused_ordering(159) 00:10:38.012 fused_ordering(160) 00:10:38.012 fused_ordering(161) 00:10:38.012 fused_ordering(162) 00:10:38.012 fused_ordering(163) 00:10:38.012 fused_ordering(164) 00:10:38.012 fused_ordering(165) 00:10:38.012 fused_ordering(166) 00:10:38.012 fused_ordering(167) 00:10:38.012 fused_ordering(168) 00:10:38.012 fused_ordering(169) 00:10:38.012 fused_ordering(170) 00:10:38.012 fused_ordering(171) 00:10:38.012 fused_ordering(172) 00:10:38.012 fused_ordering(173) 00:10:38.012 fused_ordering(174) 00:10:38.012 fused_ordering(175) 00:10:38.012 fused_ordering(176) 00:10:38.012 fused_ordering(177) 00:10:38.012 fused_ordering(178) 00:10:38.012 fused_ordering(179) 00:10:38.012 fused_ordering(180) 00:10:38.012 fused_ordering(181) 00:10:38.012 fused_ordering(182) 00:10:38.012 fused_ordering(183) 00:10:38.012 fused_ordering(184) 00:10:38.012 fused_ordering(185) 00:10:38.012 fused_ordering(186) 00:10:38.012 fused_ordering(187) 00:10:38.012 fused_ordering(188) 00:10:38.012 fused_ordering(189) 00:10:38.012 fused_ordering(190) 00:10:38.012 fused_ordering(191) 00:10:38.012 fused_ordering(192) 00:10:38.012 fused_ordering(193) 00:10:38.013 fused_ordering(194) 00:10:38.013 fused_ordering(195) 00:10:38.013 fused_ordering(196) 00:10:38.013 fused_ordering(197) 00:10:38.013 fused_ordering(198) 00:10:38.013 fused_ordering(199) 00:10:38.013 fused_ordering(200) 00:10:38.013 fused_ordering(201) 00:10:38.013 fused_ordering(202) 00:10:38.013 fused_ordering(203) 00:10:38.013 fused_ordering(204) 00:10:38.013 fused_ordering(205) 00:10:38.272 fused_ordering(206) 00:10:38.272 fused_ordering(207) 00:10:38.272 fused_ordering(208) 00:10:38.272 fused_ordering(209) 00:10:38.272 fused_ordering(210) 00:10:38.272 
fused_ordering(211) 00:10:38.272 fused_ordering(212) 00:10:38.272 fused_ordering(213) 00:10:38.272 fused_ordering(214) 00:10:38.272 fused_ordering(215) 00:10:38.272 fused_ordering(216) 00:10:38.272 fused_ordering(217) 00:10:38.272 fused_ordering(218) 00:10:38.272 fused_ordering(219) 00:10:38.272 fused_ordering(220) 00:10:38.272 fused_ordering(221) 00:10:38.272 fused_ordering(222) 00:10:38.272 fused_ordering(223) 00:10:38.272 fused_ordering(224) 00:10:38.272 fused_ordering(225) 00:10:38.272 fused_ordering(226) 00:10:38.272 fused_ordering(227) 00:10:38.272 fused_ordering(228) 00:10:38.272 fused_ordering(229) 00:10:38.272 fused_ordering(230) 00:10:38.272 fused_ordering(231) 00:10:38.272 fused_ordering(232) 00:10:38.272 fused_ordering(233) 00:10:38.272 fused_ordering(234) 00:10:38.272 fused_ordering(235) 00:10:38.272 fused_ordering(236) 00:10:38.272 fused_ordering(237) 00:10:38.272 fused_ordering(238) 00:10:38.272 fused_ordering(239) 00:10:38.272 fused_ordering(240) 00:10:38.272 fused_ordering(241) 00:10:38.272 fused_ordering(242) 00:10:38.272 fused_ordering(243) 00:10:38.272 fused_ordering(244) 00:10:38.272 fused_ordering(245) 00:10:38.272 fused_ordering(246) 00:10:38.272 fused_ordering(247) 00:10:38.272 fused_ordering(248) 00:10:38.272 fused_ordering(249) 00:10:38.272 fused_ordering(250) 00:10:38.272 fused_ordering(251) 00:10:38.272 fused_ordering(252) 00:10:38.272 fused_ordering(253) 00:10:38.272 fused_ordering(254) 00:10:38.272 fused_ordering(255) 00:10:38.272 fused_ordering(256) 00:10:38.272 fused_ordering(257) 00:10:38.272 fused_ordering(258) 00:10:38.272 fused_ordering(259) 00:10:38.272 fused_ordering(260) 00:10:38.272 fused_ordering(261) 00:10:38.272 fused_ordering(262) 00:10:38.272 fused_ordering(263) 00:10:38.272 fused_ordering(264) 00:10:38.272 fused_ordering(265) 00:10:38.272 fused_ordering(266) 00:10:38.272 fused_ordering(267) 00:10:38.272 fused_ordering(268) 00:10:38.272 fused_ordering(269) 00:10:38.272 fused_ordering(270) 00:10:38.272 fused_ordering(271) 
00:10:38.272 fused_ordering(272) 00:10:38.272 fused_ordering(273) 00:10:38.272 fused_ordering(274) 00:10:38.272 fused_ordering(275) 00:10:38.272 fused_ordering(276) 00:10:38.272 fused_ordering(277) 00:10:38.273 fused_ordering(278) 00:10:38.273 fused_ordering(279) 00:10:38.273 fused_ordering(280) 00:10:38.273 fused_ordering(281) 00:10:38.273 fused_ordering(282) 00:10:38.273 fused_ordering(283) 00:10:38.273 fused_ordering(284) 00:10:38.273 fused_ordering(285) 00:10:38.273 fused_ordering(286) 00:10:38.273 fused_ordering(287) 00:10:38.273 fused_ordering(288) 00:10:38.273 fused_ordering(289) 00:10:38.273 fused_ordering(290) 00:10:38.273 fused_ordering(291) 00:10:38.273 fused_ordering(292) 00:10:38.273 fused_ordering(293) 00:10:38.273 fused_ordering(294) 00:10:38.273 fused_ordering(295) 00:10:38.273 fused_ordering(296) 00:10:38.273 fused_ordering(297) 00:10:38.273 fused_ordering(298) 00:10:38.273 fused_ordering(299) 00:10:38.273 fused_ordering(300) 00:10:38.273 fused_ordering(301) 00:10:38.273 fused_ordering(302) 00:10:38.273 fused_ordering(303) 00:10:38.273 fused_ordering(304) 00:10:38.273 fused_ordering(305) 00:10:38.273 fused_ordering(306) 00:10:38.273 fused_ordering(307) 00:10:38.273 fused_ordering(308) 00:10:38.273 fused_ordering(309) 00:10:38.273 fused_ordering(310) 00:10:38.273 fused_ordering(311) 00:10:38.273 fused_ordering(312) 00:10:38.273 fused_ordering(313) 00:10:38.273 fused_ordering(314) 00:10:38.273 fused_ordering(315) 00:10:38.273 fused_ordering(316) 00:10:38.273 fused_ordering(317) 00:10:38.273 fused_ordering(318) 00:10:38.273 fused_ordering(319) 00:10:38.273 fused_ordering(320) 00:10:38.273 fused_ordering(321) 00:10:38.273 fused_ordering(322) 00:10:38.273 fused_ordering(323) 00:10:38.273 fused_ordering(324) 00:10:38.273 fused_ordering(325) 00:10:38.273 fused_ordering(326) 00:10:38.273 fused_ordering(327) 00:10:38.273 fused_ordering(328) 00:10:38.273 fused_ordering(329) 00:10:38.273 fused_ordering(330) 00:10:38.273 fused_ordering(331) 00:10:38.273 
fused_ordering(332) 00:10:38.273 fused_ordering(333) 00:10:38.273 fused_ordering(334) 00:10:38.273 fused_ordering(335) 00:10:38.273 fused_ordering(336) 00:10:38.273 fused_ordering(337) 00:10:38.273 fused_ordering(338) 00:10:38.273 fused_ordering(339) 00:10:38.273 fused_ordering(340) 00:10:38.273 fused_ordering(341) 00:10:38.273 fused_ordering(342) 00:10:38.273 fused_ordering(343) 00:10:38.273 fused_ordering(344) 00:10:38.273 fused_ordering(345) 00:10:38.273 fused_ordering(346) 00:10:38.273 fused_ordering(347) 00:10:38.273 fused_ordering(348) 00:10:38.273 fused_ordering(349) 00:10:38.273 fused_ordering(350) 00:10:38.273 fused_ordering(351) 00:10:38.273 fused_ordering(352) 00:10:38.273 fused_ordering(353) 00:10:38.273 fused_ordering(354) 00:10:38.273 fused_ordering(355) 00:10:38.273 fused_ordering(356) 00:10:38.273 fused_ordering(357) 00:10:38.273 fused_ordering(358) 00:10:38.273 fused_ordering(359) 00:10:38.273 fused_ordering(360) 00:10:38.273 fused_ordering(361) 00:10:38.273 fused_ordering(362) 00:10:38.273 fused_ordering(363) 00:10:38.273 fused_ordering(364) 00:10:38.273 fused_ordering(365) 00:10:38.273 fused_ordering(366) 00:10:38.273 fused_ordering(367) 00:10:38.273 fused_ordering(368) 00:10:38.273 fused_ordering(369) 00:10:38.273 fused_ordering(370) 00:10:38.273 fused_ordering(371) 00:10:38.273 fused_ordering(372) 00:10:38.273 fused_ordering(373) 00:10:38.273 fused_ordering(374) 00:10:38.273 fused_ordering(375) 00:10:38.273 fused_ordering(376) 00:10:38.273 fused_ordering(377) 00:10:38.273 fused_ordering(378) 00:10:38.273 fused_ordering(379) 00:10:38.273 fused_ordering(380) 00:10:38.273 fused_ordering(381) 00:10:38.273 fused_ordering(382) 00:10:38.273 fused_ordering(383) 00:10:38.273 fused_ordering(384) 00:10:38.273 fused_ordering(385) 00:10:38.273 fused_ordering(386) 00:10:38.273 fused_ordering(387) 00:10:38.273 fused_ordering(388) 00:10:38.273 fused_ordering(389) 00:10:38.273 fused_ordering(390) 00:10:38.273 fused_ordering(391) 00:10:38.273 fused_ordering(392) 
00:10:38.273 fused_ordering(393) 00:10:38.273 fused_ordering(394) 00:10:38.273 fused_ordering(395) 00:10:38.273 fused_ordering(396) 00:10:38.273 fused_ordering(397) 00:10:38.273 fused_ordering(398) 00:10:38.273 fused_ordering(399) 00:10:38.273 fused_ordering(400) 00:10:38.273 fused_ordering(401) 00:10:38.273 fused_ordering(402) 00:10:38.273 fused_ordering(403) 00:10:38.273 fused_ordering(404) 00:10:38.273 fused_ordering(405) 00:10:38.273 fused_ordering(406) 00:10:38.273 fused_ordering(407) 00:10:38.273 fused_ordering(408) 00:10:38.273 fused_ordering(409) 00:10:38.273 fused_ordering(410) 00:10:38.840 fused_ordering(411) 00:10:38.840 fused_ordering(412) 00:10:38.840 fused_ordering(413) 00:10:38.840 fused_ordering(414) 00:10:38.840 fused_ordering(415) 00:10:38.840 fused_ordering(416) 00:10:38.840 fused_ordering(417) 00:10:38.840 fused_ordering(418) 00:10:38.840 fused_ordering(419) 00:10:38.840 fused_ordering(420) 00:10:38.840 fused_ordering(421) 00:10:38.840 fused_ordering(422) 00:10:38.840 fused_ordering(423) 00:10:38.840 fused_ordering(424) 00:10:38.840 fused_ordering(425) 00:10:38.840 fused_ordering(426) 00:10:38.840 fused_ordering(427) 00:10:38.840 fused_ordering(428) 00:10:38.840 fused_ordering(429) 00:10:38.840 fused_ordering(430) 00:10:38.840 fused_ordering(431) 00:10:38.840 fused_ordering(432) 00:10:38.840 fused_ordering(433) 00:10:38.840 fused_ordering(434) 00:10:38.840 fused_ordering(435) 00:10:38.840 fused_ordering(436) 00:10:38.840 fused_ordering(437) 00:10:38.840 fused_ordering(438) 00:10:38.840 fused_ordering(439) 00:10:38.840 fused_ordering(440) 00:10:38.840 fused_ordering(441) 00:10:38.840 fused_ordering(442) 00:10:38.840 fused_ordering(443) 00:10:38.840 fused_ordering(444) 00:10:38.840 fused_ordering(445) 00:10:38.840 fused_ordering(446) 00:10:38.840 fused_ordering(447) 00:10:38.840 fused_ordering(448) 00:10:38.840 fused_ordering(449) 00:10:38.840 fused_ordering(450) 00:10:38.840 fused_ordering(451) 00:10:38.840 fused_ordering(452) 00:10:38.840 
fused_ordering(453) 00:10:38.840 fused_ordering(454) 00:10:38.840 fused_ordering(455) 00:10:38.840 fused_ordering(456) 00:10:38.840 fused_ordering(457) 00:10:38.840 fused_ordering(458) 00:10:38.840 fused_ordering(459) 00:10:38.840 fused_ordering(460) 00:10:38.840 fused_ordering(461) 00:10:38.840 fused_ordering(462) 00:10:38.840 fused_ordering(463) 00:10:38.840 fused_ordering(464) 00:10:38.840 fused_ordering(465) 00:10:38.840 fused_ordering(466) 00:10:38.840 fused_ordering(467) 00:10:38.840 fused_ordering(468) 00:10:38.840 fused_ordering(469) 00:10:38.840 fused_ordering(470) 00:10:38.840 fused_ordering(471) 00:10:38.840 fused_ordering(472) 00:10:38.840 fused_ordering(473) 00:10:38.840 fused_ordering(474) 00:10:38.840 fused_ordering(475) 00:10:38.840 fused_ordering(476) 00:10:38.840 fused_ordering(477) 00:10:38.840 fused_ordering(478) 00:10:38.840 fused_ordering(479) 00:10:38.840 fused_ordering(480) 00:10:38.840 fused_ordering(481) 00:10:38.840 fused_ordering(482) 00:10:38.840 fused_ordering(483) 00:10:38.840 fused_ordering(484) 00:10:38.840 fused_ordering(485) 00:10:38.840 fused_ordering(486) 00:10:38.840 fused_ordering(487) 00:10:38.840 fused_ordering(488) 00:10:38.840 fused_ordering(489) 00:10:38.840 fused_ordering(490) 00:10:38.840 fused_ordering(491) 00:10:38.840 fused_ordering(492) 00:10:38.840 fused_ordering(493) 00:10:38.840 fused_ordering(494) 00:10:38.840 fused_ordering(495) 00:10:38.840 fused_ordering(496) 00:10:38.840 fused_ordering(497) 00:10:38.840 fused_ordering(498) 00:10:38.840 fused_ordering(499) 00:10:38.840 fused_ordering(500) 00:10:38.840 fused_ordering(501) 00:10:38.840 fused_ordering(502) 00:10:38.840 fused_ordering(503) 00:10:38.840 fused_ordering(504) 00:10:38.840 fused_ordering(505) 00:10:38.840 fused_ordering(506) 00:10:38.840 fused_ordering(507) 00:10:38.840 fused_ordering(508) 00:10:38.840 fused_ordering(509) 00:10:38.840 fused_ordering(510) 00:10:38.840 fused_ordering(511) 00:10:38.840 fused_ordering(512) 00:10:38.840 fused_ordering(513) 
00:10:38.840 fused_ordering(514) 00:10:38.840 fused_ordering(515) 00:10:38.840 fused_ordering(516) 00:10:38.840 fused_ordering(517) 00:10:38.840 fused_ordering(518) 00:10:38.840 fused_ordering(519) 00:10:38.840 fused_ordering(520) 00:10:38.840 fused_ordering(521) 00:10:38.840 fused_ordering(522) 00:10:38.840 fused_ordering(523) 00:10:38.840 fused_ordering(524) 00:10:38.840 fused_ordering(525) 00:10:38.840 fused_ordering(526) 00:10:38.840 fused_ordering(527) 00:10:38.840 fused_ordering(528) 00:10:38.840 fused_ordering(529) 00:10:38.840 fused_ordering(530) 00:10:38.840 fused_ordering(531) 00:10:38.840 fused_ordering(532) 00:10:38.840 fused_ordering(533) 00:10:38.840 fused_ordering(534) 00:10:38.840 fused_ordering(535) 00:10:38.840 fused_ordering(536) 00:10:38.840 fused_ordering(537) 00:10:38.840 fused_ordering(538) 00:10:38.840 fused_ordering(539) 00:10:38.840 fused_ordering(540) 00:10:38.840 fused_ordering(541) 00:10:38.840 fused_ordering(542) 00:10:38.840 fused_ordering(543) 00:10:38.840 fused_ordering(544) 00:10:38.840 fused_ordering(545) 00:10:38.840 fused_ordering(546) 00:10:38.840 fused_ordering(547) 00:10:38.840 fused_ordering(548) 00:10:38.840 fused_ordering(549) 00:10:38.840 fused_ordering(550) 00:10:38.840 fused_ordering(551) 00:10:38.840 fused_ordering(552) 00:10:38.840 fused_ordering(553) 00:10:38.840 fused_ordering(554) 00:10:38.840 fused_ordering(555) 00:10:38.840 fused_ordering(556) 00:10:38.840 fused_ordering(557) 00:10:38.840 fused_ordering(558) 00:10:38.840 fused_ordering(559) 00:10:38.840 fused_ordering(560) 00:10:38.840 fused_ordering(561) 00:10:38.840 fused_ordering(562) 00:10:38.840 fused_ordering(563) 00:10:38.840 fused_ordering(564) 00:10:38.840 fused_ordering(565) 00:10:38.840 fused_ordering(566) 00:10:38.840 fused_ordering(567) 00:10:38.840 fused_ordering(568) 00:10:38.840 fused_ordering(569) 00:10:38.840 fused_ordering(570) 00:10:38.840 fused_ordering(571) 00:10:38.840 fused_ordering(572) 00:10:38.840 fused_ordering(573) 00:10:38.840 
fused_ordering(574) 00:10:38.840 fused_ordering(575) 00:10:38.840 fused_ordering(576) 00:10:38.840 fused_ordering(577) 00:10:38.840 fused_ordering(578) 00:10:38.840 fused_ordering(579) 00:10:38.840 fused_ordering(580) 00:10:38.840 fused_ordering(581) 00:10:38.841 fused_ordering(582) 00:10:38.841 fused_ordering(583) 00:10:38.841 fused_ordering(584) 00:10:38.841 fused_ordering(585) 00:10:38.841 fused_ordering(586) 00:10:38.841 fused_ordering(587) 00:10:38.841 fused_ordering(588) 00:10:38.841 fused_ordering(589) 00:10:38.841 fused_ordering(590) 00:10:38.841 fused_ordering(591) 00:10:38.841 fused_ordering(592) 00:10:38.841 fused_ordering(593) 00:10:38.841 fused_ordering(594) 00:10:38.841 fused_ordering(595) 00:10:38.841 fused_ordering(596) 00:10:38.841 fused_ordering(597) 00:10:38.841 fused_ordering(598) 00:10:38.841 fused_ordering(599) 00:10:38.841 fused_ordering(600) 00:10:38.841 fused_ordering(601) 00:10:38.841 fused_ordering(602) 00:10:38.841 fused_ordering(603) 00:10:38.841 fused_ordering(604) 00:10:38.841 fused_ordering(605) 00:10:38.841 fused_ordering(606) 00:10:38.841 fused_ordering(607) 00:10:38.841 fused_ordering(608) 00:10:38.841 fused_ordering(609) 00:10:38.841 fused_ordering(610) 00:10:38.841 fused_ordering(611) 00:10:38.841 fused_ordering(612) 00:10:38.841 fused_ordering(613) 00:10:38.841 fused_ordering(614) 00:10:38.841 fused_ordering(615) 00:10:39.102 fused_ordering(616) 00:10:39.102 fused_ordering(617) 00:10:39.102 fused_ordering(618) 00:10:39.102 fused_ordering(619) 00:10:39.102 fused_ordering(620) 00:10:39.102 fused_ordering(621) 00:10:39.102 fused_ordering(622) 00:10:39.102 fused_ordering(623) 00:10:39.102 fused_ordering(624) 00:10:39.102 fused_ordering(625) 00:10:39.102 fused_ordering(626) 00:10:39.102 fused_ordering(627) 00:10:39.102 fused_ordering(628) 00:10:39.102 fused_ordering(629) 00:10:39.102 fused_ordering(630) 00:10:39.102 fused_ordering(631) 00:10:39.102 fused_ordering(632) 00:10:39.102 fused_ordering(633) 00:10:39.102 fused_ordering(634) 
00:10:39.102 fused_ordering(635) 00:10:39.102 fused_ordering(636) 00:10:39.102 fused_ordering(637) 00:10:39.102 fused_ordering(638) 00:10:39.102 fused_ordering(639) 00:10:39.102 fused_ordering(640) 00:10:39.102 fused_ordering(641) 00:10:39.102 fused_ordering(642) 00:10:39.102 fused_ordering(643) 00:10:39.102 fused_ordering(644) 00:10:39.102 fused_ordering(645) 00:10:39.102 fused_ordering(646) 00:10:39.102 fused_ordering(647) 00:10:39.102 fused_ordering(648) 00:10:39.102 fused_ordering(649) 00:10:39.102 fused_ordering(650) 00:10:39.102 fused_ordering(651) 00:10:39.102 fused_ordering(652) 00:10:39.102 fused_ordering(653) 00:10:39.102 fused_ordering(654) 00:10:39.102 fused_ordering(655) 00:10:39.102 fused_ordering(656) 00:10:39.102 fused_ordering(657) 00:10:39.102 fused_ordering(658) 00:10:39.102 fused_ordering(659) 00:10:39.102 fused_ordering(660) 00:10:39.102 fused_ordering(661) 00:10:39.102 fused_ordering(662) 00:10:39.102 fused_ordering(663) 00:10:39.102 fused_ordering(664) 00:10:39.102 fused_ordering(665) 00:10:39.102 fused_ordering(666) 00:10:39.102 fused_ordering(667) 00:10:39.102 fused_ordering(668) 00:10:39.102 fused_ordering(669) 00:10:39.102 fused_ordering(670) 00:10:39.102 fused_ordering(671) 00:10:39.102 fused_ordering(672) 00:10:39.102 fused_ordering(673) 00:10:39.102 fused_ordering(674) 00:10:39.102 fused_ordering(675) 00:10:39.102 fused_ordering(676) 00:10:39.102 fused_ordering(677) 00:10:39.102 fused_ordering(678) 00:10:39.102 fused_ordering(679) 00:10:39.102 fused_ordering(680) 00:10:39.102 fused_ordering(681) 00:10:39.102 fused_ordering(682) 00:10:39.102 fused_ordering(683) 00:10:39.102 fused_ordering(684) 00:10:39.102 fused_ordering(685) 00:10:39.102 fused_ordering(686) 00:10:39.102 fused_ordering(687) 00:10:39.102 fused_ordering(688) 00:10:39.102 fused_ordering(689) 00:10:39.102 fused_ordering(690) 00:10:39.102 fused_ordering(691) 00:10:39.102 fused_ordering(692) 00:10:39.102 fused_ordering(693) 00:10:39.102 fused_ordering(694) 00:10:39.102 
fused_ordering(695) 00:10:39.102 fused_ordering(696) 00:10:39.102 fused_ordering(697) 00:10:39.102 fused_ordering(698) 00:10:39.102 fused_ordering(699) 00:10:39.102 fused_ordering(700) 00:10:39.102 fused_ordering(701) 00:10:39.102 fused_ordering(702) 00:10:39.102 fused_ordering(703) 00:10:39.102 fused_ordering(704) 00:10:39.102 fused_ordering(705) 00:10:39.102 fused_ordering(706) 00:10:39.102 fused_ordering(707) 00:10:39.102 fused_ordering(708) 00:10:39.102 fused_ordering(709) 00:10:39.102 fused_ordering(710) 00:10:39.102 fused_ordering(711) 00:10:39.102 fused_ordering(712) 00:10:39.102 fused_ordering(713) 00:10:39.102 fused_ordering(714) 00:10:39.102 fused_ordering(715) 00:10:39.102 fused_ordering(716) 00:10:39.102 fused_ordering(717) 00:10:39.102 fused_ordering(718) 00:10:39.102 fused_ordering(719) 00:10:39.102 fused_ordering(720) 00:10:39.102 fused_ordering(721) 00:10:39.102 fused_ordering(722) 00:10:39.102 fused_ordering(723) 00:10:39.102 fused_ordering(724) 00:10:39.102 fused_ordering(725) 00:10:39.102 fused_ordering(726) 00:10:39.102 fused_ordering(727) 00:10:39.102 fused_ordering(728) 00:10:39.102 fused_ordering(729) 00:10:39.102 fused_ordering(730) 00:10:39.102 fused_ordering(731) 00:10:39.102 fused_ordering(732) 00:10:39.102 fused_ordering(733) 00:10:39.102 fused_ordering(734) 00:10:39.102 fused_ordering(735) 00:10:39.102 fused_ordering(736) 00:10:39.102 fused_ordering(737) 00:10:39.102 fused_ordering(738) 00:10:39.102 fused_ordering(739) 00:10:39.102 fused_ordering(740) 00:10:39.102 fused_ordering(741) 00:10:39.102 fused_ordering(742) 00:10:39.102 fused_ordering(743) 00:10:39.102 fused_ordering(744) 00:10:39.102 fused_ordering(745) 00:10:39.102 fused_ordering(746) 00:10:39.102 fused_ordering(747) 00:10:39.102 fused_ordering(748) 00:10:39.102 fused_ordering(749) 00:10:39.102 fused_ordering(750) 00:10:39.102 fused_ordering(751) 00:10:39.102 fused_ordering(752) 00:10:39.102 fused_ordering(753) 00:10:39.102 fused_ordering(754) 00:10:39.102 fused_ordering(755) 
00:10:39.102 fused_ordering(756) 00:10:39.102 fused_ordering(757) 00:10:39.102 fused_ordering(758) 00:10:39.102 fused_ordering(759) 00:10:39.102 fused_ordering(760) 00:10:39.102 fused_ordering(761) 00:10:39.102 fused_ordering(762) 00:10:39.102 fused_ordering(763) 00:10:39.102 fused_ordering(764) 00:10:39.102 fused_ordering(765) 00:10:39.102 fused_ordering(766) 00:10:39.102 fused_ordering(767) 00:10:39.102 fused_ordering(768) 00:10:39.102 fused_ordering(769) 00:10:39.102 fused_ordering(770) 00:10:39.102 fused_ordering(771) 00:10:39.102 fused_ordering(772) 00:10:39.102 fused_ordering(773) 00:10:39.102 fused_ordering(774) 00:10:39.102 fused_ordering(775) 00:10:39.102 fused_ordering(776) 00:10:39.103 fused_ordering(777) 00:10:39.103 fused_ordering(778) 00:10:39.103 fused_ordering(779) 00:10:39.103 fused_ordering(780) 00:10:39.103 fused_ordering(781) 00:10:39.103 fused_ordering(782) 00:10:39.103 fused_ordering(783) 00:10:39.103 fused_ordering(784) 00:10:39.103 fused_ordering(785) 00:10:39.103 fused_ordering(786) 00:10:39.103 fused_ordering(787) 00:10:39.103 fused_ordering(788) 00:10:39.103 fused_ordering(789) 00:10:39.103 fused_ordering(790) 00:10:39.103 fused_ordering(791) 00:10:39.103 fused_ordering(792) 00:10:39.103 fused_ordering(793) 00:10:39.103 fused_ordering(794) 00:10:39.103 fused_ordering(795) 00:10:39.103 fused_ordering(796) 00:10:39.103 fused_ordering(797) 00:10:39.103 fused_ordering(798) 00:10:39.103 fused_ordering(799) 00:10:39.103 fused_ordering(800) 00:10:39.103 fused_ordering(801) 00:10:39.103 fused_ordering(802) 00:10:39.103 fused_ordering(803) 00:10:39.103 fused_ordering(804) 00:10:39.103 fused_ordering(805) 00:10:39.103 fused_ordering(806) 00:10:39.103 fused_ordering(807) 00:10:39.103 fused_ordering(808) 00:10:39.103 fused_ordering(809) 00:10:39.103 fused_ordering(810) 00:10:39.103 fused_ordering(811) 00:10:39.103 fused_ordering(812) 00:10:39.103 fused_ordering(813) 00:10:39.103 fused_ordering(814) 00:10:39.103 fused_ordering(815) 00:10:39.103 
fused_ordering(816) 00:10:39.103 fused_ordering(817) 00:10:39.103 fused_ordering(818) 00:10:39.103 fused_ordering(819) 00:10:39.103 fused_ordering(820) 00:10:39.669 [fused_ordering entries 821-996 repeat identically, all timestamped 00:10:39.669] fused_ordering(997)
00:10:39.669 fused_ordering(998) 00:10:39.669 fused_ordering(999) 00:10:39.669 fused_ordering(1000) 00:10:39.669 fused_ordering(1001) 00:10:39.669 fused_ordering(1002) 00:10:39.669 fused_ordering(1003) 00:10:39.669 fused_ordering(1004) 00:10:39.669 fused_ordering(1005) 00:10:39.669 fused_ordering(1006) 00:10:39.669 fused_ordering(1007) 00:10:39.669 fused_ordering(1008) 00:10:39.669 fused_ordering(1009) 00:10:39.669 fused_ordering(1010) 00:10:39.669 fused_ordering(1011) 00:10:39.669 fused_ordering(1012) 00:10:39.669 fused_ordering(1013) 00:10:39.669 fused_ordering(1014) 00:10:39.669 fused_ordering(1015) 00:10:39.669 fused_ordering(1016) 00:10:39.669 fused_ordering(1017) 00:10:39.669 fused_ordering(1018) 00:10:39.669 fused_ordering(1019) 00:10:39.669 fused_ordering(1020) 00:10:39.669 fused_ordering(1021) 00:10:39.669 fused_ordering(1022) 00:10:39.669 fused_ordering(1023) 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:39.669 rmmod nvme_tcp 00:10:39.669 rmmod nvme_fabrics 00:10:39.669 rmmod nvme_keyring 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@125 -- # return 0 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 911831 ']' 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 911831 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@942 -- # '[' -z 911831 ']' 00:10:39.669 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # kill -0 911831 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # uname 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 911831 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # echo 'killing process with pid 911831' 00:10:39.927 killing process with pid 911831 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@961 -- # kill 911831 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # wait 911831 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.927 
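The `killing process with pid 911831` / `kill` / `wait` sequence above is the teardown helper from autotest_common.sh reaping the nvmf target. A minimal sketch of that kill-and-verify pattern (the function name and structure here are assumptions, not SPDK's exact helper):

```shell
#!/bin/bash
# Sketch of the kill-and-verify teardown pattern seen in the log above.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0  # pid already gone: nothing to do
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                 # reap the child so the pid fully disappears
  return 0
}

sleep 30 &
bgpid=$!
killprocess "$bgpid"
kill -0 "$bgpid" 2>/dev/null || echo "stopped"
```

The `kill -0` probe sends no signal; it only checks that the pid is still deliverable, which is why the log's helper can safely `return 0` when the process exited on its own.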
23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.927 23:36:28 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.461 23:36:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:42.461 00:10:42.461 real 0m10.475s 00:10:42.461 user 0m5.444s 00:10:42.461 sys 0m5.471s 00:10:42.461 23:36:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:42.461 23:36:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 ************************************ 00:10:42.461 END TEST nvmf_fused_ordering 00:10:42.461 ************************************ 00:10:42.461 23:36:30 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:10:42.461 23:36:30 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:42.461 23:36:30 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:42.461 23:36:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:42.461 23:36:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:42.461 ************************************ 00:10:42.461 START TEST nvmf_delete_subsystem 00:10:42.461 ************************************ 00:10:42.461 23:36:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:42.461 * Looking for test storage... 
00:10:42.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:42.461 23:36:31 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:42.461 23:36:31 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.796 23:36:36 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:47.796 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:47.796 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:47.796 Found net devices under 0000:86:00.0: cvl_0_0 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:47.796 Found net devices under 0000:86:00.1: cvl_0_1 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.796 
23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:47.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:47.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:10:47.796 00:10:47.796 --- 10.0.0.2 ping statistics --- 00:10:47.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.796 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:10:47.796 00:10:47.796 --- 10.0.0.1 ping statistics --- 00:10:47.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.796 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:47.796 
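The `nvmfappstart` / `waitforlisten 915797` step below blocks until the freshly launched `nvmf_tgt` creates its RPC socket (`/var/tmp/spdk.sock`). A generic sketch of that poll-with-retry-budget pattern; the real `waitforlisten` probes the SPDK RPC socket, while this stand-in just polls for a path (names here are illustrative assumptions):

```shell
#!/bin/bash
# Generic "wait until the app is listening" loop, modeled on waitforlisten.
waitforfile() {
  local path=$1 max_retries=${2:-100} i=0
  until [ -e "$path" ]; do
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && return 1  # give up after the retry budget
    sleep 0.1
  done
  return 0
}

sock=$(mktemp -u)                 # hypothetical stand-in for /var/tmp/spdk.sock
( sleep 0.3; touch "$sock" ) &    # simulated app creating its socket late
waitforfile "$sock" 50 && echo "listening on $sock"
rm -f "$sock"
```

Bounding the loop with `max_retries` is what lets the harness fail fast with a clear error instead of hanging forever when the target crashes during startup.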
23:36:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=915797 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 915797 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@823 -- # '[' -z 915797 ']' 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:47.796 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.797 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:47.797 23:36:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:47.797 [2024-07-15 23:36:36.712532] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:10:47.797 [2024-07-15 23:36:36.712574] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.056 [2024-07-15 23:36:36.769396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:48.056 [2024-07-15 23:36:36.841913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:48.056 [2024-07-15 23:36:36.841956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.056 [2024-07-15 23:36:36.841963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.056 [2024-07-15 23:36:36.841970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.056 [2024-07-15 23:36:36.841975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.056 [2024-07-15 23:36:36.842015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.056 [2024-07-15 23:36:36.842018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # return 0 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.626 [2024-07-15 23:36:37.550170] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.626 [2024-07-15 23:36:37.566334] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.626 NULL1 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.626 Delay0 00:10:48.626 23:36:37 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=915866 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:48.626 23:36:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:48.885 [2024-07-15 23:36:37.640858] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
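The target setup traced in the xtrace lines above boils down to a short RPC sequence. The following is a dry-run sketch: `rpc` here is a hypothetical stub that only echoes each call; in the real test the same arguments are passed to scripts/rpc.py against a running nvmf_tgt.

```shell
# Dry-run sketch of the RPC sequence from delete_subsystem.sh (lines 15-24 above).
# 'rpc' is a stand-in stub, NOT the real SPDK client; it only prints the call.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev wrapped around NULL1 is what keeps I/O in flight long enough for the later nvmf_delete_subsystem call to race against it.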
00:10:50.792 23:36:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.792 23:36:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:50.792 23:36:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 
00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 [2024-07-15 23:36:39.810134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f32d400d450 is same with the state(5) to be set 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 
00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 starting I/O failed: -6 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 [2024-07-15 23:36:39.810953] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62c3e0 is same with the state(5) to be set 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Write completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.052 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read 
completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error 
(sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Read completed with error (sct=0, sc=8) 00:10:51.053 Write completed with error (sct=0, sc=8) 00:10:51.991 [2024-07-15 23:36:40.778207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62dac0 is same with the state(5) to be set 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 
00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 [2024-07-15 23:36:40.814627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62c5c0 is same with the state(5) to be set 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 [2024-07-15 23:36:40.814792] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f32d4000c00 is same with the state(5) to be set 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 [2024-07-15 23:36:40.815108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f32d400cfe0 is same with the state(5) to be set 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, 
sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 Write completed with error (sct=0, sc=8) 00:10:51.991 Read completed with error (sct=0, sc=8) 00:10:51.991 [2024-07-15 23:36:40.815238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f32d400d760 is same with the state(5) to be set 00:10:51.991 Initializing NVMe Controllers 00:10:51.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.991 Controller IO queue size 128, less than required. 00:10:51.991 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:51.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:51.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:51.991 Initialization complete. Launching workers.
00:10:51.991 ========================================================
00:10:51.991                                                                 Latency(us)
00:10:51.991 Device Information                                              :       IOPS      MiB/s    Average        min        max
00:10:51.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     152.95       0.07  885602.99     210.94 1011189.30
00:10:51.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     179.76       0.09  959845.79     606.61 1043958.15
00:10:51.991 ========================================================
00:10:51.991 Total                                                           :     332.71       0.16  925716.26     210.94 1043958.15
00:10:51.991
00:10:51.991 [2024-07-15 23:36:40.815841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62dac0 (9): Bad file descriptor
00:10:51.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:51.991 23:36:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:10:51.991 23:36:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:10:51.991 23:36:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 915866
00:10:51.992 23:36:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 915866
00:10:52.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (915866) - No such process
00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 915866
00:10:52.559 23:36:41
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # local es=0 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # valid_exec_arg wait 915866 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@630 -- # local arg=wait 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # type -t wait 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # wait 915866 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # es=1 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:10:52.559 [2024-07-15 23:36:41.345779] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=916549 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916549 00:10:52.559 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:52.559 [2024-07-15 23:36:41.403108] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
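The repeated `(( delay++ > 20 ))` / `kill -0` / `sleep 0.5` lines that follow are a bounded wait for spdk_nvme_perf to exit. A minimal standalone sketch of that polling loop, with a background `sleep` standing in for the perf process (an assumption for illustration only):

```shell
#!/usr/bin/env sh
# Bounded wait-for-exit loop, modeled on delete_subsystem.sh's delay/kill -0
# pattern. 'sleep 1' is a stand-in for the real spdk_nvme_perf child.
sleep 1 &
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # signal 0: existence check only
    if [ "$delay" -gt 20 ]; then            # give up after ~10 s (20 * 0.5 s)
        echo "timeout waiting for pid $perf_pid" >&2
        break
    fi
    delay=$((delay + 1))
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null || true        # reap; tolerate 'No such process'
echo "pid $perf_pid gone after $delay polls"
```

Once the child exits, `kill -0` fails with "No such process", which is exactly the stderr line the log shows at delete_subsystem.sh line 57.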
00:10:53.126 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:53.126 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916549 00:10:53.126 23:36:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:53.693 23:36:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:53.693 23:36:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916549 00:10:53.693 23:36:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:53.952 23:36:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:53.952 23:36:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916549 00:10:53.952 23:36:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:54.519 23:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:54.519 23:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916549 00:10:54.519 23:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.092 23:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.092 23:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916549 00:10:55.092 23:36:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.661 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:55.661 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916549 00:10:55.661 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:55.661 Initializing NVMe Controllers 00:10:55.661 Attached to NVMe over Fabrics controller 
at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:55.661 Controller IO queue size 128, less than required.
00:10:55.661 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:55.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:55.661 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:55.661 Initialization complete. Launching workers.
00:10:55.661 ========================================================
00:10:55.661                                                                 Latency(us)
00:10:55.661 Device Information                                              :       IOPS      MiB/s    Average        min        max
00:10:55.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1003214.28 1000177.39 1041814.13
00:10:55.661 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1004699.10 1000239.11 1041229.88
00:10:55.661 ========================================================
00:10:55.661 Total                                                           :     256.00       0.12 1003956.69 1000177.39 1041814.13
00:10:55.661
00:10:55.920 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:10:55.920 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 916549
00:10:55.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (916549) - No such process
00:10:55.920 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 916549
00:10:55.920 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:10:55.920 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:10:55.920 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:55.920 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '['
tcp == tcp ']' 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:56.179 rmmod nvme_tcp 00:10:56.179 rmmod nvme_fabrics 00:10:56.179 rmmod nvme_keyring 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 915797 ']' 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 915797 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@942 -- # '[' -z 915797 ']' 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # kill -0 915797 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # uname 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 915797 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # echo 'killing process with pid 915797' 00:10:56.179 killing process with pid 915797 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@961 -- # kill 915797 00:10:56.179 23:36:44 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@966 -- # wait 915797 00:10:56.439 23:36:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.439 23:36:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.439 23:36:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.439 23:36:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.439 23:36:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.439 23:36:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.439 23:36:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.439 23:36:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.344 23:36:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:58.344 00:10:58.344 real 0m16.250s 00:10:58.344 user 0m30.346s 00:10:58.344 sys 0m5.002s 00:10:58.344 23:36:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:58.344 23:36:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:58.344 ************************************ 00:10:58.344 END TEST nvmf_delete_subsystem 00:10:58.344 ************************************ 00:10:58.344 23:36:47 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:10:58.344 23:36:47 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:58.344 23:36:47 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:58.344 23:36:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:58.344 23:36:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:58.344 ************************************ 00:10:58.344 START TEST 
nvmf_ns_masking 00:10:58.344 ************************************ 00:10:58.344 23:36:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1117 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:58.604 * Looking for test storage... 00:10:58.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=65fc7198-3c74-4202-8da9-9b2caa46429e 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=6641aa1b-d0be-4ba1-98c7-175fd0bf7b4b 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=73fdf430-29ee-4fe9-b0a5-a0c3aa961f6d 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.604 23:36:47 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.604 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.605 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.605 23:36:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.605 23:36:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:03.879 23:36:52 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:03.879 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:03.879 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:03.879 Found net devices under 0000:86:00.0: cvl_0_0 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:03.879 Found net devices under 0000:86:00.1: cvl_0_1 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:03.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:11:03.879 00:11:03.879 --- 10.0.0.2 ping statistics --- 00:11:03.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.879 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:11:03.879 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:03.879 00:11:03.879 --- 10.0.0.1 ping statistics --- 00:11:03.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.880 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.880 23:36:52 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=920544 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 920544 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@823 -- # '[' -z 920544 ']' 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:03.880 23:36:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:03.880 [2024-07-15 23:36:52.480671] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:11:03.880 [2024-07-15 23:36:52.480715] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.880 [2024-07-15 23:36:52.536355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.880 [2024-07-15 23:36:52.614531] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.880 [2024-07-15 23:36:52.614564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.880 [2024-07-15 23:36:52.614571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.880 [2024-07-15 23:36:52.614577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.880 [2024-07-15 23:36:52.614582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:03.880 [2024-07-15 23:36:52.614601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.495 23:36:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:04.495 23:36:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # return 0 00:11:04.495 23:36:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.495 23:36:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:04.495 23:36:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:04.495 23:36:53 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.495 23:36:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:04.495 [2024-07-15 23:36:53.468814] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.753 23:36:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:04.753 23:36:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:04.753 23:36:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:04.753 Malloc1 00:11:04.753 23:36:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:05.032 Malloc2 00:11:05.032 23:36:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:05.291 23:36:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:05.291 23:36:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.550 [2024-07-15 23:36:54.363571] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.550 23:36:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:05.550 23:36:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73fdf430-29ee-4fe9-b0a5-a0c3aa961f6d -a 10.0.0.2 -s 4420 -i 4 00:11:05.550 23:36:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.550 23:36:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:11:05.550 23:36:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.550 23:36:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:11:05.550 23:36:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:11:08.088 
23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.088 [ 0]:0x1 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=69fccaefb05342579d0b32be56d6e6a2 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 69fccaefb05342579d0b32be56d6e6a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:08.088 [ 0]:0x1 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=69fccaefb05342579d0b32be56d6e6a2 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 69fccaefb05342579d0b32be56d6e6a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:08.088 [ 1]:0x2 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e3ca8e89a0a45179b4dd65f35a75151 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e3ca8e89a0a45179b4dd65f35a75151 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:08.088 23:36:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.088 23:36:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.346 23:36:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:08.606 23:36:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:08.606 23:36:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73fdf430-29ee-4fe9-b0a5-a0c3aa961f6d -a 10.0.0.2 -s 4420 -i 4 00:11:08.606 23:36:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:08.606 23:36:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:11:08.606 23:36:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.606 23:36:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n 1 ]] 00:11:08.606 23:36:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # nvme_device_counter=1 00:11:08.606 23:36:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:11:10.511 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:11:10.511 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:11:10.511 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 
00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:10.769 [ 0]:0x2 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e3ca8e89a0a45179b4dd65f35a75151 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e3ca8e89a0a45179b4dd65f35a75151 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:10.769 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:11.028 [ 0]:0x1 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=69fccaefb05342579d0b32be56d6e6a2 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 69fccaefb05342579d0b32be56d6e6a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:11:11.028 [ 1]:0x2 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e3ca8e89a0a45179b4dd65f35a75151 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e3ca8e89a0a45179b4dd65f35a75151 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:11.028 23:36:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # jq -r .nguid 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:11.286 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:11.287 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:11.287 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:11.287 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:11.287 [ 0]:0x2 00:11:11.287 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:11.287 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:11.287 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e3ca8e89a0a45179b4dd65f35a75151 00:11:11.287 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e3ca8e89a0a45179b4dd65f35a75151 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:11.287 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:11.287 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.545 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host 
nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:11.802 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:11.802 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73fdf430-29ee-4fe9-b0a5-a0c3aa961f6d -a 10.0.0.2 -s 4420 -i 4 00:11:11.802 23:37:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:11.802 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:11:11.802 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.802 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n 2 ]] 00:11:11.802 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # nvme_device_counter=2 00:11:11.802 23:37:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:11:13.705 23:37:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:11:13.705 23:37:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:11:13.705 23:37:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=2 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:13.975 [ 0]:0x1 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=69fccaefb05342579d0b32be56d6e6a2 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 69fccaefb05342579d0b32be56d6e6a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:13.975 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:14.233 [ 1]:0x2 00:11:14.233 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:14.233 23:37:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e3ca8e89a0a45179b4dd65f35a75151 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e3ca8e89a0a45179b4dd65f35a75151 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:14.233 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:14.491 23:37:03 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:14.491 [ 0]:0x2 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e3ca8e89a0a45179b4dd65f35a75151 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e3ca8e89a0a45179b4dd65f35a75151 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:14.491 [2024-07-15 23:37:03.433279] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:14.491 request: 00:11:14.491 { 00:11:14.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.491 "nsid": 2, 00:11:14.491 "host": "nqn.2016-06.io.spdk:host1", 00:11:14.491 "method": "nvmf_ns_remove_host", 00:11:14.491 "req_id": 1 00:11:14.491 } 00:11:14.491 Got JSON-RPC error response 00:11:14.491 response: 00:11:14.491 { 00:11:14.491 "code": -32602, 00:11:14.491 "message": "Invalid parameters" 00:11:14.491 } 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 
00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:14.491 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:14.748 [ 0]:0x2 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0e3ca8e89a0a45179b4dd65f35a75151 00:11:14.748 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0e3ca8e89a0a45179b4dd65f35a75151 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=922545 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 922545 /var/tmp/host.sock 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@823 -- # '[' -z 922545 ']' 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/host.sock 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:14.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:14.749 23:37:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:14.749 [2024-07-15 23:37:03.636182] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:11:14.749 [2024-07-15 23:37:03.636234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid922545 ] 00:11:14.749 [2024-07-15 23:37:03.689827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.007 [2024-07-15 23:37:03.762947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.572 23:37:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:15.572 23:37:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # return 0 00:11:15.572 23:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.830 23:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:15.830 23:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 65fc7198-3c74-4202-8da9-9b2caa46429e 00:11:15.830 23:37:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:15.830 23:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 
-g 65FC71983C7442028DA99B2CAA46429E -i 00:11:16.088 23:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 6641aa1b-d0be-4ba1-98c7-175fd0bf7b4b 00:11:16.088 23:37:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:16.088 23:37:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 6641AA1BD0BE4BA198C7175FD0BF7B4B -i 00:11:16.346 23:37:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:16.346 23:37:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:16.604 23:37:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:16.604 23:37:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:16.862 nvme0n1 00:11:16.862 23:37:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:16.862 23:37:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:17.120 
nvme1n2 00:11:17.120 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:17.120 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:17.120 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:17.120 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:17.120 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:17.378 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:17.378 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:17.378 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:17.378 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:17.636 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 65fc7198-3c74-4202-8da9-9b2caa46429e == \6\5\f\c\7\1\9\8\-\3\c\7\4\-\4\2\0\2\-\8\d\a\9\-\9\b\2\c\a\a\4\6\4\2\9\e ]] 00:11:17.636 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:17.636 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:17.636 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:17.636 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 6641aa1b-d0be-4ba1-98c7-175fd0bf7b4b == \6\6\4\1\a\a\1\b\-\d\0\b\e\-\4\b\a\1\-\9\8\c\7\-\1\7\5\f\d\0\b\f\7\b\4\b ]] 00:11:17.636 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 922545 
00:11:17.636 23:37:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@942 -- # '[' -z 922545 ']' 00:11:17.636 23:37:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # kill -0 922545 00:11:17.636 23:37:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # uname 00:11:17.895 23:37:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:17.895 23:37:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 922545 00:11:17.895 23:37:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:11:17.895 23:37:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:11:17.895 23:37:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@960 -- # echo 'killing process with pid 922545' 00:11:17.895 killing process with pid 922545 00:11:17.895 23:37:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@961 -- # kill 922545 00:11:17.895 23:37:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # wait 922545 00:11:18.154 23:37:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:11:18.413 rmmod nvme_tcp 00:11:18.413 rmmod nvme_fabrics 00:11:18.413 rmmod nvme_keyring 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 920544 ']' 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 920544 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@942 -- # '[' -z 920544 ']' 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # kill -0 920544 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # uname 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 920544 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@960 -- # echo 'killing process with pid 920544' 00:11:18.413 killing process with pid 920544 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@961 -- # kill 920544 00:11:18.413 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # wait 920544 00:11:18.672 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:18.672 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:18.672 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:18.672 23:37:07 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:18.672 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:18.672 23:37:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.672 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.672 23:37:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.579 23:37:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:20.579 00:11:20.579 real 0m22.209s 00:11:20.579 user 0m24.132s 00:11:20.579 sys 0m5.782s 00:11:20.579 23:37:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:20.579 23:37:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:20.579 ************************************ 00:11:20.579 END TEST nvmf_ns_masking 00:11:20.579 ************************************ 00:11:20.579 23:37:09 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:20.579 23:37:09 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:20.840 23:37:09 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:20.840 23:37:09 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:20.840 23:37:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:20.840 23:37:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:20.840 ************************************ 00:11:20.840 START TEST nvmf_nvme_cli 00:11:20.840 ************************************ 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:20.840 * Looking for test storage... 
00:11:20.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.840 23:37:09 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.840 23:37:09 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:20.840 23:37:09 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.148 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.148 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:26.148 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:26.149 23:37:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.149 23:37:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:26.149 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:26.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:26.149 23:37:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:26.149 Found net devices under 0000:86:00.0: cvl_0_0 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:26.149 Found net devices under 0000:86:00.1: cvl_0_1 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.149 23:37:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.149 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.149 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.149 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:26.149 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:26.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:26.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:11:26.408 00:11:26.408 --- 10.0.0.2 ping statistics --- 00:11:26.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.408 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:26.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:11:26.408 00:11:26.408 --- 10.0.0.1 ping statistics --- 00:11:26.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.408 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=926596 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 926596 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@823 -- # '[' -z 926596 ']' 
00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:26.408 23:37:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:26.408 [2024-07-15 23:37:15.287093] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:11:26.408 [2024-07-15 23:37:15.287135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.408 [2024-07-15 23:37:15.345007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.668 [2024-07-15 23:37:15.419492] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.668 [2024-07-15 23:37:15.419533] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.668 [2024-07-15 23:37:15.419539] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.668 [2024-07-15 23:37:15.419547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.668 [2024-07-15 23:37:15.419551] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:26.668 [2024-07-15 23:37:15.419614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.668 [2024-07-15 23:37:15.419714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.668 [2024-07-15 23:37:15.419802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:26.668 [2024-07-15 23:37:15.419803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # return 0 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.237 [2024-07-15 23:37:16.133315] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.237 Malloc0 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:27.237 
23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.237 Malloc1 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:27.237 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 
00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.497 [2024-07-15 23:37:16.214996] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:27.497 00:11:27.497 Discovery Log Number of Records 2, Generation counter 2 00:11:27.497 =====Discovery Log Entry 0====== 00:11:27.497 trtype: tcp 00:11:27.497 adrfam: ipv4 00:11:27.497 subtype: current discovery subsystem 00:11:27.497 treq: not required 00:11:27.497 portid: 0 00:11:27.497 trsvcid: 4420 00:11:27.497 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:27.497 traddr: 10.0.0.2 00:11:27.497 eflags: explicit discovery connections, duplicate discovery information 00:11:27.497 sectype: none 00:11:27.497 =====Discovery Log Entry 1====== 00:11:27.497 trtype: tcp 00:11:27.497 adrfam: ipv4 00:11:27.497 subtype: nvme subsystem 00:11:27.497 treq: not required 00:11:27.497 portid: 0 00:11:27.497 trsvcid: 4420 00:11:27.497 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:27.497 traddr: 10.0.0.2 00:11:27.497 eflags: none 00:11:27.497 sectype: none 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:27.497 23:37:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:27.498 23:37:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:27.498 23:37:16 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:27.498 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:27.498 23:37:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:28.879 23:37:17 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:28.879 23:37:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1192 -- # local i=0 00:11:28.879 23:37:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.879 23:37:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # [[ -n 2 ]] 00:11:28.879 23:37:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # nvme_device_counter=2 00:11:28.879 23:37:17 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # sleep 2 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 
00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_devices=2 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # return 0 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:30.784 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:30.785 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.785 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:30.785 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:30.785 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.785 23:37:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:30.785 /dev/nvme0n1 ]] 00:11:30.785 23:37:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:30.785 23:37:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:30.785 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 
00:11:30.785 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:30.785 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:31.044 23:37:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1213 -- # local i=0 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # return 0 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:31.303 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:31.304 rmmod nvme_tcp 00:11:31.304 rmmod nvme_fabrics 00:11:31.304 rmmod nvme_keyring 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 926596 ']' 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 926596 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- 
common/autotest_common.sh@942 -- # '[' -z 926596 ']' 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # kill -0 926596 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # uname 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 926596 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # echo 'killing process with pid 926596' 00:11:31.304 killing process with pid 926596 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@961 -- # kill 926596 00:11:31.304 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # wait 926596 00:11:31.563 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:31.563 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:31.563 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:31.563 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:31.563 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:31.563 23:37:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.563 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.563 23:37:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.101 23:37:22 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:34.101 00:11:34.101 real 0m12.948s 00:11:34.101 user 0m21.697s 
00:11:34.101 sys 0m4.714s 00:11:34.101 23:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:34.101 23:37:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:34.101 ************************************ 00:11:34.101 END TEST nvmf_nvme_cli 00:11:34.101 ************************************ 00:11:34.101 23:37:22 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:11:34.101 23:37:22 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:34.101 23:37:22 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:34.101 23:37:22 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:34.101 23:37:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:34.101 23:37:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:34.101 ************************************ 00:11:34.101 START TEST nvmf_vfio_user 00:11:34.101 ************************************ 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:34.101 * Looking for test storage... 
00:11:34.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.101 
23:37:22 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=928088 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 928088' 00:11:34.101 Process pid: 928088 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 928088 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@823 -- # '[' -z 928088 ']' 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:34.101 23:37:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:34.101 [2024-07-15 23:37:22.771124] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:11:34.101 [2024-07-15 23:37:22.771173] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.101 [2024-07-15 23:37:22.824352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.101 [2024-07-15 23:37:22.896549] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.101 [2024-07-15 23:37:22.896588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.101 [2024-07-15 23:37:22.896595] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.101 [2024-07-15 23:37:22.896601] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.101 [2024-07-15 23:37:22.896605] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:34.101 [2024-07-15 23:37:22.896699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.101 [2024-07-15 23:37:22.896816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.101 [2024-07-15 23:37:22.896883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.101 [2024-07-15 23:37:22.896884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.670 23:37:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:34.670 23:37:23 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # return 0 00:11:34.670 23:37:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:36.048 23:37:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:36.048 23:37:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:36.048 23:37:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:36.048 23:37:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:36.048 23:37:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:36.048 23:37:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:36.048 Malloc1 00:11:36.048 23:37:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:36.307 23:37:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:36.566 23:37:25 nvmf_tcp.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:36.826 23:37:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:36.826 23:37:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:36.826 23:37:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:36.826 Malloc2 00:11:36.826 23:37:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:37.086 23:37:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:37.346 23:37:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:37.346 23:37:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:37.346 23:37:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:37.346 23:37:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:37.346 23:37:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:37.346 23:37:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:37.346 23:37:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:37.346 [2024-07-15 23:37:26.274050] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:11:37.346 [2024-07-15 23:37:26.274083] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid928576 ] 00:11:37.346 [2024-07-15 23:37:26.301763] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:37.346 [2024-07-15 23:37:26.311550] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:37.346 [2024-07-15 23:37:26.311573] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc0c3daf000 00:11:37.346 [2024-07-15 23:37:26.312547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:37.346 [2024-07-15 23:37:26.313545] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:37.346 [2024-07-15 23:37:26.314554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:37.346 [2024-07-15 23:37:26.315560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:37.346 [2024-07-15 23:37:26.316563] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:37.346 
[2024-07-15 23:37:26.317567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:37.346 [2024-07-15 23:37:26.318573] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:37.607 [2024-07-15 23:37:26.319575] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:37.607 [2024-07-15 23:37:26.320583] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:37.607 [2024-07-15 23:37:26.320592] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc0c3da4000 00:11:37.607 [2024-07-15 23:37:26.321545] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:37.607 [2024-07-15 23:37:26.334682] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:37.607 [2024-07-15 23:37:26.334706] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:37.607 [2024-07-15 23:37:26.339675] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:37.607 [2024-07-15 23:37:26.339711] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:37.607 [2024-07-15 23:37:26.339780] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:37.607 [2024-07-15 23:37:26.339797] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] 
setting state to read vs (no timeout) 00:11:37.607 [2024-07-15 23:37:26.339802] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:37.607 [2024-07-15 23:37:26.340674] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:37.607 [2024-07-15 23:37:26.340683] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:37.607 [2024-07-15 23:37:26.340689] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:37.607 [2024-07-15 23:37:26.341676] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:37.607 [2024-07-15 23:37:26.341684] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:37.607 [2024-07-15 23:37:26.341691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:37.607 [2024-07-15 23:37:26.342682] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:37.607 [2024-07-15 23:37:26.342690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:37.607 [2024-07-15 23:37:26.343690] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:37.607 [2024-07-15 23:37:26.343699] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && 
CSTS.RDY = 0 00:11:37.607 [2024-07-15 23:37:26.343704] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:37.607 [2024-07-15 23:37:26.343709] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:37.607 [2024-07-15 23:37:26.343814] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:37.607 [2024-07-15 23:37:26.343819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:37.607 [2024-07-15 23:37:26.343824] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:37.607 [2024-07-15 23:37:26.344694] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:37.607 [2024-07-15 23:37:26.345697] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:37.607 [2024-07-15 23:37:26.346703] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:37.607 [2024-07-15 23:37:26.347703] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:37.607 [2024-07-15 23:37:26.347778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:37.607 [2024-07-15 23:37:26.348717] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:37.607 
[2024-07-15 23:37:26.348725] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:37.607 [2024-07-15 23:37:26.348729] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:37.607 [2024-07-15 23:37:26.348747] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:37.607 [2024-07-15 23:37:26.348755] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:37.607 [2024-07-15 23:37:26.348768] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:37.607 [2024-07-15 23:37:26.348772] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:37.607 [2024-07-15 23:37:26.348785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:37.607 [2024-07-15 23:37:26.348836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:37.607 [2024-07-15 23:37:26.348845] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:37.607 [2024-07-15 23:37:26.348853] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:37.607 [2024-07-15 23:37:26.348857] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:37.607 [2024-07-15 23:37:26.348861] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:37.607 [2024-07-15 23:37:26.348868] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:37.607 [2024-07-15 23:37:26.348872] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:37.607 [2024-07-15 23:37:26.348876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:37.607 [2024-07-15 23:37:26.348883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:37.607 [2024-07-15 23:37:26.348892] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:37.607 [2024-07-15 23:37:26.348903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:37.607 [2024-07-15 23:37:26.348915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.607 [2024-07-15 23:37:26.348923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.607 [2024-07-15 23:37:26.348930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.607 [2024-07-15 23:37:26.348938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:37.607 [2024-07-15 23:37:26.348942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep 
alive timeout (timeout 30000 ms) 00:11:37.607 [2024-07-15 23:37:26.348950] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:37.607 [2024-07-15 23:37:26.348958] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:37.607 [2024-07-15 23:37:26.348966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:37.607 [2024-07-15 23:37:26.348972] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:37.607 [2024-07-15 23:37:26.348976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:37.607 [2024-07-15 23:37:26.348982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.348987] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.348996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349057] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349064] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349072] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:37.608 [2024-07-15 23:37:26.349076] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:37.608 [2024-07-15 23:37:26.349082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349103] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:37.608 [2024-07-15 23:37:26.349110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349117] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349123] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:37.608 [2024-07-15 23:37:26.349127] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:37.608 [2024-07-15 23:37:26.349133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349174] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:37.608 [2024-07-15 23:37:26.349178] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:37.608 [2024-07-15 23:37:26.349183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349205] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349239] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349244] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:37.608 [2024-07-15 23:37:26.349248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:37.608 [2024-07-15 23:37:26.349252] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:37.608 [2024-07-15 23:37:26.349269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349307] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e 
sqhd:000f p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349344] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:37.608 [2024-07-15 23:37:26.349349] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:37.608 [2024-07-15 23:37:26.349352] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:37.608 [2024-07-15 23:37:26.349355] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:37.608 [2024-07-15 23:37:26.349361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:37.608 [2024-07-15 23:37:26.349367] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:37.608 [2024-07-15 23:37:26.349371] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:37.608 [2024-07-15 23:37:26.349377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349383] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:37.608 [2024-07-15 23:37:26.349387] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:37.608 [2024-07-15 23:37:26.349392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349399] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:37.608 [2024-07-15 23:37:26.349402] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 
00:11:37.608 [2024-07-15 23:37:26.349408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:37.608 [2024-07-15 23:37:26.349414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:37.608 [2024-07-15 23:37:26.349441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:37.608 ===================================================== 00:11:37.608 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:37.608 ===================================================== 00:11:37.608 Controller Capabilities/Features 00:11:37.608 ================================ 00:11:37.608 Vendor ID: 4e58 00:11:37.608 Subsystem Vendor ID: 4e58 00:11:37.608 Serial Number: SPDK1 00:11:37.608 Model Number: SPDK bdev Controller 00:11:37.608 Firmware Version: 24.09 00:11:37.608 Recommended Arb Burst: 6 00:11:37.608 IEEE OUI Identifier: 8d 6b 50 00:11:37.608 Multi-path I/O 00:11:37.608 May have multiple subsystem ports: Yes 00:11:37.608 May have multiple controllers: Yes 00:11:37.608 Associated with SR-IOV VF: No 00:11:37.608 Max Data Transfer Size: 131072 00:11:37.608 Max Number of Namespaces: 32 00:11:37.608 Max Number of I/O Queues: 127 00:11:37.608 NVMe Specification Version (VS): 1.3 00:11:37.608 NVMe Specification Version (Identify): 1.3 00:11:37.608 Maximum Queue Entries: 256 00:11:37.608 Contiguous Queues Required: Yes 00:11:37.608 Arbitration Mechanisms Supported 00:11:37.608 
Weighted Round Robin: Not Supported 00:11:37.608 Vendor Specific: Not Supported 00:11:37.608 Reset Timeout: 15000 ms 00:11:37.608 Doorbell Stride: 4 bytes 00:11:37.608 NVM Subsystem Reset: Not Supported 00:11:37.608 Command Sets Supported 00:11:37.608 NVM Command Set: Supported 00:11:37.608 Boot Partition: Not Supported 00:11:37.608 Memory Page Size Minimum: 4096 bytes 00:11:37.608 Memory Page Size Maximum: 4096 bytes 00:11:37.608 Persistent Memory Region: Not Supported 00:11:37.608 Optional Asynchronous Events Supported 00:11:37.608 Namespace Attribute Notices: Supported 00:11:37.608 Firmware Activation Notices: Not Supported 00:11:37.608 ANA Change Notices: Not Supported 00:11:37.608 PLE Aggregate Log Change Notices: Not Supported 00:11:37.608 LBA Status Info Alert Notices: Not Supported 00:11:37.608 EGE Aggregate Log Change Notices: Not Supported 00:11:37.608 Normal NVM Subsystem Shutdown event: Not Supported 00:11:37.608 Zone Descriptor Change Notices: Not Supported 00:11:37.608 Discovery Log Change Notices: Not Supported 00:11:37.608 Controller Attributes 00:11:37.608 128-bit Host Identifier: Supported 00:11:37.608 Non-Operational Permissive Mode: Not Supported 00:11:37.608 NVM Sets: Not Supported 00:11:37.608 Read Recovery Levels: Not Supported 00:11:37.608 Endurance Groups: Not Supported 00:11:37.608 Predictable Latency Mode: Not Supported 00:11:37.608 Traffic Based Keep ALive: Not Supported 00:11:37.608 Namespace Granularity: Not Supported 00:11:37.608 SQ Associations: Not Supported 00:11:37.608 UUID List: Not Supported 00:11:37.608 Multi-Domain Subsystem: Not Supported 00:11:37.608 Fixed Capacity Management: Not Supported 00:11:37.608 Variable Capacity Management: Not Supported 00:11:37.608 Delete Endurance Group: Not Supported 00:11:37.608 Delete NVM Set: Not Supported 00:11:37.608 Extended LBA Formats Supported: Not Supported 00:11:37.608 Flexible Data Placement Supported: Not Supported 00:11:37.608 00:11:37.608 Controller Memory Buffer Support 
00:11:37.608 ================================ 00:11:37.608 Supported: No 00:11:37.608 00:11:37.608 Persistent Memory Region Support 00:11:37.608 ================================ 00:11:37.608 Supported: No 00:11:37.608 00:11:37.608 Admin Command Set Attributes 00:11:37.608 ============================ 00:11:37.608 Security Send/Receive: Not Supported 00:11:37.608 Format NVM: Not Supported 00:11:37.608 Firmware Activate/Download: Not Supported 00:11:37.608 Namespace Management: Not Supported 00:11:37.608 Device Self-Test: Not Supported 00:11:37.608 Directives: Not Supported 00:11:37.608 NVMe-MI: Not Supported 00:11:37.608 Virtualization Management: Not Supported 00:11:37.608 Doorbell Buffer Config: Not Supported 00:11:37.608 Get LBA Status Capability: Not Supported 00:11:37.608 Command & Feature Lockdown Capability: Not Supported 00:11:37.608 Abort Command Limit: 4 00:11:37.608 Async Event Request Limit: 4 00:11:37.608 Number of Firmware Slots: N/A 00:11:37.608 Firmware Slot 1 Read-Only: N/A 00:11:37.608 Firmware Activation Without Reset: N/A 00:11:37.608 Multiple Update Detection Support: N/A 00:11:37.608 Firmware Update Granularity: No Information Provided 00:11:37.608 Per-Namespace SMART Log: No 00:11:37.608 Asymmetric Namespace Access Log Page: Not Supported 00:11:37.608 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:37.608 Command Effects Log Page: Supported 00:11:37.608 Get Log Page Extended Data: Supported 00:11:37.608 Telemetry Log Pages: Not Supported 00:11:37.608 Persistent Event Log Pages: Not Supported 00:11:37.608 Supported Log Pages Log Page: May Support 00:11:37.608 Commands Supported & Effects Log Page: Not Supported 00:11:37.608 Feature Identifiers & Effects Log Page:May Support 00:11:37.608 NVMe-MI Commands & Effects Log Page: May Support 00:11:37.608 Data Area 4 for Telemetry Log: Not Supported 00:11:37.608 Error Log Page Entries Supported: 128 00:11:37.608 Keep Alive: Supported 00:11:37.608 Keep Alive Granularity: 10000 ms 00:11:37.608 
00:11:37.608 NVM Command Set Attributes 00:11:37.608 ========================== 00:11:37.608 Submission Queue Entry Size 00:11:37.608 Max: 64 00:11:37.608 Min: 64 00:11:37.608 Completion Queue Entry Size 00:11:37.608 Max: 16 00:11:37.608 Min: 16 00:11:37.608 Number of Namespaces: 32 00:11:37.608 Compare Command: Supported 00:11:37.608 Write Uncorrectable Command: Not Supported 00:11:37.608 Dataset Management Command: Supported 00:11:37.608 Write Zeroes Command: Supported 00:11:37.608 Set Features Save Field: Not Supported 00:11:37.608 Reservations: Not Supported 00:11:37.608 Timestamp: Not Supported 00:11:37.608 Copy: Supported 00:11:37.608 Volatile Write Cache: Present 00:11:37.608 Atomic Write Unit (Normal): 1 00:11:37.608 Atomic Write Unit (PFail): 1 00:11:37.608 Atomic Compare & Write Unit: 1 00:11:37.608 Fused Compare & Write: Supported 00:11:37.608 Scatter-Gather List 00:11:37.608 SGL Command Set: Supported (Dword aligned) 00:11:37.608 SGL Keyed: Not Supported 00:11:37.608 SGL Bit Bucket Descriptor: Not Supported 00:11:37.608 SGL Metadata Pointer: Not Supported 00:11:37.608 Oversized SGL: Not Supported 00:11:37.608 SGL Metadata Address: Not Supported 00:11:37.608 SGL Offset: Not Supported 00:11:37.608 Transport SGL Data Block: Not Supported 00:11:37.609 Replay Protected Memory Block: Not Supported 00:11:37.609 00:11:37.609 Firmware Slot Information 00:11:37.609 ========================= 00:11:37.609 Active slot: 1 00:11:37.609 Slot 1 Firmware Revision: 24.09 00:11:37.609 00:11:37.609 00:11:37.609 Commands Supported and Effects 00:11:37.609 ============================== 00:11:37.609 Admin Commands 00:11:37.609 -------------- 00:11:37.609 Get Log Page (02h): Supported 00:11:37.609 Identify (06h): Supported 00:11:37.609 Abort (08h): Supported 00:11:37.609 Set Features (09h): Supported 00:11:37.609 Get Features (0Ah): Supported 00:11:37.609 Asynchronous Event Request (0Ch): Supported 00:11:37.609 Keep Alive (18h): Supported 00:11:37.609 I/O Commands 00:11:37.609 
------------ 00:11:37.609 Flush (00h): Supported LBA-Change 00:11:37.609 Write (01h): Supported LBA-Change 00:11:37.609 Read (02h): Supported 00:11:37.609 Compare (05h): Supported 00:11:37.609 Write Zeroes (08h): Supported LBA-Change 00:11:37.609 Dataset Management (09h): Supported LBA-Change 00:11:37.609 Copy (19h): Supported LBA-Change 00:11:37.609 00:11:37.609 Error Log 00:11:37.609 ========= 00:11:37.609 00:11:37.609 Arbitration 00:11:37.609 =========== 00:11:37.609 Arbitration Burst: 1 00:11:37.609 00:11:37.609 Power Management 00:11:37.609 ================ 00:11:37.609 Number of Power States: 1 00:11:37.609 Current Power State: Power State #0 00:11:37.609 Power State #0: 00:11:37.609 Max Power: 0.00 W 00:11:37.609 Non-Operational State: Operational 00:11:37.609 Entry Latency: Not Reported 00:11:37.609 Exit Latency: Not Reported 00:11:37.609 Relative Read Throughput: 0 00:11:37.609 Relative Read Latency: 0 00:11:37.609 Relative Write Throughput: 0 00:11:37.609 Relative Write Latency: 0 00:11:37.609 Idle Power: Not Reported 00:11:37.609 Active Power: Not Reported 00:11:37.609 Non-Operational Permissive Mode: Not Supported 00:11:37.609 00:11:37.609 Health Information 00:11:37.609 ================== 00:11:37.609 Critical Warnings: 00:11:37.609 Available Spare Space: OK 00:11:37.609 Temperature: OK 00:11:37.609 Device Reliability: OK 00:11:37.609 Read Only: No 00:11:37.609 Volatile Memory Backup: OK 00:11:37.609 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:37.609 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:37.609 Available Spare: 0% 00:11:37.609 Available Sp[2024-07-15 23:37:26.349526] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:37.609 [2024-07-15 23:37:26.349536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:37.609 [2024-07-15 23:37:26.349563] 
nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:37.609 [2024-07-15 23:37:26.349571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.609 [2024-07-15 23:37:26.349578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.609 [2024-07-15 23:37:26.349583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.609 [2024-07-15 23:37:26.349589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:37.609 [2024-07-15 23:37:26.349720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:37.609 [2024-07-15 23:37:26.349730] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:37.609 [2024-07-15 23:37:26.350721] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:37.609 [2024-07-15 23:37:26.350766] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:37.609 [2024-07-15 23:37:26.350772] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:37.609 [2024-07-15 23:37:26.351728] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:37.609 [2024-07-15 23:37:26.351738] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:37.609 [2024-07-15 
23:37:26.351785] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:37.609 [2024-07-15 23:37:26.357232] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:37.609 are Threshold: 0% 00:11:37.609 Life Percentage Used: 0% 00:11:37.609 Data Units Read: 0 00:11:37.609 Data Units Written: 0 00:11:37.609 Host Read Commands: 0 00:11:37.609 Host Write Commands: 0 00:11:37.609 Controller Busy Time: 0 minutes 00:11:37.609 Power Cycles: 0 00:11:37.609 Power On Hours: 0 hours 00:11:37.609 Unsafe Shutdowns: 0 00:11:37.609 Unrecoverable Media Errors: 0 00:11:37.609 Lifetime Error Log Entries: 0 00:11:37.609 Warning Temperature Time: 0 minutes 00:11:37.609 Critical Temperature Time: 0 minutes 00:11:37.609 00:11:37.609 Number of Queues 00:11:37.609 ================ 00:11:37.609 Number of I/O Submission Queues: 127 00:11:37.609 Number of I/O Completion Queues: 127 00:11:37.609 00:11:37.609 Active Namespaces 00:11:37.609 ================= 00:11:37.609 Namespace ID:1 00:11:37.609 Error Recovery Timeout: Unlimited 00:11:37.609 Command Set Identifier: NVM (00h) 00:11:37.609 Deallocate: Supported 00:11:37.609 Deallocated/Unwritten Error: Not Supported 00:11:37.609 Deallocated Read Value: Unknown 00:11:37.609 Deallocate in Write Zeroes: Not Supported 00:11:37.609 Deallocated Guard Field: 0xFFFF 00:11:37.610 Flush: Supported 00:11:37.610 Reservation: Supported 00:11:37.610 Namespace Sharing Capabilities: Multiple Controllers 00:11:37.610 Size (in LBAs): 131072 (0GiB) 00:11:37.610 Capacity (in LBAs): 131072 (0GiB) 00:11:37.610 Utilization (in LBAs): 131072 (0GiB) 00:11:37.610 NGUID: 9D0BB68AC942417688571222067BA629 00:11:37.610 UUID: 9d0bb68a-c942-4176-8857-1222067ba629 00:11:37.610 Thin Provisioning: Not Supported 00:11:37.610 Per-NS Atomic Units: Yes 00:11:37.610 Atomic Boundary Size (Normal): 0 00:11:37.610 Atomic Boundary Size (PFail): 0 
00:11:37.610 Atomic Boundary Offset: 0 00:11:37.610 Maximum Single Source Range Length: 65535 00:11:37.610 Maximum Copy Length: 65535 00:11:37.610 Maximum Source Range Count: 1 00:11:37.610 NGUID/EUI64 Never Reused: No 00:11:37.610 Namespace Write Protected: No 00:11:37.610 Number of LBA Formats: 1 00:11:37.610 Current LBA Format: LBA Format #00 00:11:37.610 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:37.610 00:11:37.610 23:37:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:37.610 [2024-07-15 23:37:26.569992] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:42.910 Initializing NVMe Controllers 00:11:42.910 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:42.910 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:42.910 Initialization complete. Launching workers. 
00:11:42.910 ======================================================== 00:11:42.910 Latency(us) 00:11:42.910 Device Information : IOPS MiB/s Average min max 00:11:42.910 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39914.19 155.91 3206.47 957.21 7597.41 00:11:42.911 ======================================================== 00:11:42.911 Total : 39914.19 155.91 3206.47 957.21 7597.41 00:11:42.911 00:11:42.911 [2024-07-15 23:37:31.587984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:42.911 23:37:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:42.911 [2024-07-15 23:37:31.814070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:48.182 Initializing NVMe Controllers 00:11:48.182 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:48.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:48.182 Initialization complete. Launching workers. 
00:11:48.182 ======================================================== 00:11:48.182 Latency(us) 00:11:48.182 Device Information : IOPS MiB/s Average min max 00:11:48.182 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.09 62.70 7979.86 6970.05 8982.10 00:11:48.182 ======================================================== 00:11:48.182 Total : 16051.09 62.70 7979.86 6970.05 8982.10 00:11:48.182 00:11:48.182 [2024-07-15 23:37:36.856519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:48.182 23:37:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:48.182 [2024-07-15 23:37:37.046440] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:53.454 [2024-07-15 23:37:42.148696] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:53.454 Initializing NVMe Controllers 00:11:53.454 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:53.454 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:53.454 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:53.454 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:53.454 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:53.454 Initialization complete. Launching workers. 
00:11:53.454 Starting thread on core 2 00:11:53.454 Starting thread on core 3 00:11:53.454 Starting thread on core 1 00:11:53.454 23:37:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:53.713 [2024-07-15 23:37:42.429619] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:57.001 [2024-07-15 23:37:45.483328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:57.001 Initializing NVMe Controllers 00:11:57.001 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.001 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.001 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:57.001 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:57.001 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:57.001 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:57.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:57.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:57.001 Initialization complete. Launching workers. 
00:11:57.001 Starting thread on core 1 with urgent priority queue 00:11:57.001 Starting thread on core 2 with urgent priority queue 00:11:57.001 Starting thread on core 3 with urgent priority queue 00:11:57.001 Starting thread on core 0 with urgent priority queue 00:11:57.001 SPDK bdev Controller (SPDK1 ) core 0: 9241.00 IO/s 10.82 secs/100000 ios 00:11:57.001 SPDK bdev Controller (SPDK1 ) core 1: 8698.67 IO/s 11.50 secs/100000 ios 00:11:57.001 SPDK bdev Controller (SPDK1 ) core 2: 8776.67 IO/s 11.39 secs/100000 ios 00:11:57.001 SPDK bdev Controller (SPDK1 ) core 3: 9170.33 IO/s 10.90 secs/100000 ios 00:11:57.001 ======================================================== 00:11:57.001 00:11:57.001 23:37:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:57.001 [2024-07-15 23:37:45.758755] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:57.001 Initializing NVMe Controllers 00:11:57.001 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.001 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:57.001 Namespace ID: 1 size: 0GB 00:11:57.001 Initialization complete. 00:11:57.001 INFO: using host memory buffer for IO 00:11:57.001 Hello world! 
00:11:57.001 [2024-07-15 23:37:45.793048] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:57.001 23:37:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:57.261 [2024-07-15 23:37:46.056310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:58.202 Initializing NVMe Controllers 00:11:58.202 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.202 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.202 Initialization complete. Launching workers. 00:11:58.202 submit (in ns) avg, min, max = 5980.6, 3298.3, 3999619.1 00:11:58.202 complete (in ns) avg, min, max = 22636.2, 1787.0, 5993028.7 00:11:58.202 00:11:58.202 Submit histogram 00:11:58.202 ================ 00:11:58.202 Range in us Cumulative Count 00:11:58.202 3.297 - 3.311: 0.0614% ( 10) 00:11:58.202 3.311 - 3.325: 0.0798% ( 3) 00:11:58.202 3.325 - 3.339: 0.2088% ( 21) 00:11:58.202 3.339 - 3.353: 0.7002% ( 80) 00:11:58.202 3.353 - 3.367: 2.3154% ( 263) 00:11:58.202 3.367 - 3.381: 6.0619% ( 610) 00:11:58.202 3.381 - 3.395: 11.0060% ( 805) 00:11:58.202 3.395 - 3.409: 16.5950% ( 910) 00:11:58.202 3.409 - 3.423: 22.6692% ( 989) 00:11:58.202 3.423 - 3.437: 28.7311% ( 987) 00:11:58.202 3.437 - 3.450: 34.2034% ( 891) 00:11:58.202 3.450 - 3.464: 39.5836% ( 876) 00:11:58.202 3.464 - 3.478: 44.3250% ( 772) 00:11:58.202 3.478 - 3.492: 48.7041% ( 713) 00:11:58.202 3.492 - 3.506: 53.2920% ( 747) 00:11:58.202 3.506 - 3.520: 59.7408% ( 1050) 00:11:58.202 3.520 - 3.534: 65.7782% ( 983) 00:11:58.202 3.534 - 3.548: 70.6424% ( 792) 00:11:58.202 3.548 - 3.562: 76.0349% ( 878) 00:11:58.202 3.562 - 3.590: 83.3743% ( 1195) 00:11:58.202 3.590 - 3.617: 86.7584% ( 551) 00:11:58.202 3.617 - 
3.645: 87.6305% ( 142) 00:11:58.202 3.645 - 3.673: 88.6685% ( 169) 00:11:58.202 3.673 - 3.701: 90.2285% ( 254) 00:11:58.202 3.701 - 3.729: 91.8867% ( 270) 00:11:58.202 3.729 - 3.757: 93.4897% ( 261) 00:11:58.202 3.757 - 3.784: 95.3016% ( 295) 00:11:58.202 3.784 - 3.812: 96.9107% ( 262) 00:11:58.202 3.812 - 3.840: 98.0776% ( 190) 00:11:58.202 3.840 - 3.868: 98.7409% ( 108) 00:11:58.202 3.868 - 3.896: 99.1831% ( 72) 00:11:58.202 3.896 - 3.923: 99.4043% ( 36) 00:11:58.203 3.923 - 3.951: 99.5517% ( 24) 00:11:58.203 3.951 - 3.979: 99.5762% ( 4) 00:11:58.203 3.979 - 4.007: 99.5824% ( 1) 00:11:58.203 4.007 - 4.035: 99.5885% ( 1) 00:11:58.203 5.287 - 5.315: 99.5946% ( 1) 00:11:58.203 5.315 - 5.343: 99.6008% ( 1) 00:11:58.203 5.398 - 5.426: 99.6069% ( 1) 00:11:58.203 5.426 - 5.454: 99.6131% ( 1) 00:11:58.203 5.760 - 5.788: 99.6254% ( 2) 00:11:58.203 5.816 - 5.843: 99.6315% ( 1) 00:11:58.203 5.843 - 5.871: 99.6376% ( 1) 00:11:58.203 5.871 - 5.899: 99.6438% ( 1) 00:11:58.203 6.038 - 6.066: 99.6499% ( 1) 00:11:58.203 6.122 - 6.150: 99.6561% ( 1) 00:11:58.203 6.205 - 6.233: 99.6683% ( 2) 00:11:58.203 6.233 - 6.261: 99.6745% ( 1) 00:11:58.203 6.289 - 6.317: 99.6806% ( 1) 00:11:58.203 6.372 - 6.400: 99.6868% ( 1) 00:11:58.203 6.428 - 6.456: 99.6929% ( 1) 00:11:58.203 6.456 - 6.483: 99.7052% ( 2) 00:11:58.203 6.567 - 6.595: 99.7113% ( 1) 00:11:58.203 6.623 - 6.650: 99.7175% ( 1) 00:11:58.203 6.650 - 6.678: 99.7236% ( 1) 00:11:58.203 6.762 - 6.790: 99.7298% ( 1) 00:11:58.203 6.790 - 6.817: 99.7420% ( 2) 00:11:58.203 6.845 - 6.873: 99.7543% ( 2) 00:11:58.203 6.929 - 6.957: 99.7605% ( 1) 00:11:58.203 7.012 - 7.040: 99.7666% ( 1) 00:11:58.203 7.068 - 7.096: 99.7728% ( 1) 00:11:58.203 7.096 - 7.123: 99.7789% ( 1) 00:11:58.203 7.235 - 7.290: 99.7850% ( 1) 00:11:58.203 7.290 - 7.346: 99.7912% ( 1) 00:11:58.203 7.346 - 7.402: 99.7973% ( 1) 00:11:58.203 7.402 - 7.457: 99.8035% ( 1) 00:11:58.203 7.513 - 7.569: 99.8096% ( 1) 00:11:58.203 7.680 - 7.736: 99.8157% ( 1) 00:11:58.203 7.791 - 
7.847: 99.8219% ( 1) 00:11:58.203 7.847 - 7.903: 99.8280% ( 1) 00:11:58.203 7.903 - 7.958: 99.8342% ( 1) 00:11:58.203 8.014 - 8.070: 99.8403% ( 1) 00:11:58.203 8.125 - 8.181: 99.8465% ( 1) 00:11:58.203 8.181 - 8.237: 99.8526% ( 1) 00:11:58.203 8.237 - 8.292: 99.8587% ( 1) 00:11:58.203 8.515 - 8.570: 99.8710% ( 2) 00:11:58.203 8.737 - 8.793: 99.8772% ( 1) 00:11:58.203 8.849 - 8.904: 99.8833% ( 1) 00:11:58.203 9.127 - 9.183: 99.8956% ( 2) 00:11:58.203 [2024-07-15 23:37:47.077191] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:58.203 9.906 - 9.962: 99.9017% ( 1) 00:11:58.203 11.019 - 11.075: 99.9079% ( 1) 00:11:58.203 11.297 - 11.353: 99.9140% ( 1) 00:11:58.203 11.353 - 11.409: 99.9202% ( 1) 00:11:58.203 13.802 - 13.857: 99.9263% ( 1) 00:11:58.203 16.696 - 16.807: 99.9324% ( 1) 00:11:58.203 19.144 - 19.256: 99.9386% ( 1) 00:11:58.203 3989.148 - 4017.642: 100.0000% ( 10) 00:11:58.203 00:11:58.203 Complete histogram 00:11:58.203 ================== 00:11:58.203 Range in us Cumulative Count 00:11:58.203 1.781 - 1.795: 0.0061% ( 1) 00:11:58.203 1.809 - 1.823: 0.4054% ( 65) 00:11:58.203 1.823 - 1.837: 2.5611% ( 351) 00:11:58.203 1.837 - 1.850: 4.1088% ( 252) 00:11:58.203 1.850 - 1.864: 5.1099% ( 163) 00:11:58.203 1.864 - 1.878: 22.0735% ( 2762) 00:11:58.203 1.878 - 1.892: 78.7495% ( 9228) 00:11:58.203 1.892 - 1.906: 92.4886% ( 2237) 00:11:58.203 1.906 - 1.920: 95.0252% ( 413) 00:11:58.203 1.920 - 1.934: 96.1553% ( 184) 00:11:58.203 1.934 - 1.948: 96.8431% ( 112) 00:11:58.203 1.948 - 1.962: 98.0899% ( 203) 00:11:58.203 1.962 - 1.976: 98.9375% ( 138) 00:11:58.203 1.976 - 1.990: 99.2139% ( 45) 00:11:58.203 1.990 - 2.003: 99.2876% ( 12) 00:11:58.203 2.003 - 2.017: 99.3121% ( 4) 00:11:58.203 2.031 - 2.045: 99.3183% ( 1) 00:11:58.203 2.045 - 2.059: 99.3244% ( 1) 00:11:58.203 2.101 - 2.115: 99.3305% ( 1) 00:11:58.203 4.007 - 4.035: 99.3367% ( 1) 00:11:58.203 4.035 - 4.063: 99.3428% ( 1) 00:11:58.203 4.063 - 4.090: 
99.3490% ( 1) 00:11:58.203 4.563 - 4.591: 99.3551% ( 1) 00:11:58.203 4.730 - 4.758: 99.3613% ( 1) 00:11:58.203 4.814 - 4.842: 99.3674% ( 1) 00:11:58.203 4.870 - 4.897: 99.3735% ( 1) 00:11:58.203 4.925 - 4.953: 99.3858% ( 2) 00:11:58.203 4.953 - 4.981: 99.3920% ( 1) 00:11:58.203 5.037 - 5.064: 99.3981% ( 1) 00:11:58.203 5.315 - 5.343: 99.4104% ( 2) 00:11:58.203 5.370 - 5.398: 99.4227% ( 2) 00:11:58.203 5.482 - 5.510: 99.4288% ( 1) 00:11:58.203 5.537 - 5.565: 99.4350% ( 1) 00:11:58.203 5.677 - 5.704: 99.4411% ( 1) 00:11:58.203 5.927 - 5.955: 99.4472% ( 1) 00:11:58.203 5.983 - 6.010: 99.4534% ( 1) 00:11:58.203 6.038 - 6.066: 99.4595% ( 1) 00:11:58.203 6.177 - 6.205: 99.4657% ( 1) 00:11:58.203 6.483 - 6.511: 99.4718% ( 1) 00:11:58.203 8.515 - 8.570: 99.4780% ( 1) 00:11:58.203 39.624 - 39.847: 99.4841% ( 1) 00:11:58.203 2179.784 - 2194.031: 99.4902% ( 1) 00:11:58.203 3989.148 - 4017.642: 99.9877% ( 81) 00:11:58.203 5983.722 - 6012.216: 100.0000% ( 2) 00:11:58.203 00:11:58.203 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:58.203 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:58.203 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:58.203 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:58.203 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:58.463 [ 00:11:58.463 { 00:11:58.463 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:58.463 "subtype": "Discovery", 00:11:58.463 "listen_addresses": [], 00:11:58.463 "allow_any_host": true, 00:11:58.463 "hosts": [] 00:11:58.463 }, 00:11:58.463 { 00:11:58.463 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:58.463 "subtype": 
"NVMe", 00:11:58.463 "listen_addresses": [ 00:11:58.463 { 00:11:58.463 "trtype": "VFIOUSER", 00:11:58.463 "adrfam": "IPv4", 00:11:58.463 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:58.463 "trsvcid": "0" 00:11:58.463 } 00:11:58.463 ], 00:11:58.463 "allow_any_host": true, 00:11:58.463 "hosts": [], 00:11:58.463 "serial_number": "SPDK1", 00:11:58.463 "model_number": "SPDK bdev Controller", 00:11:58.463 "max_namespaces": 32, 00:11:58.463 "min_cntlid": 1, 00:11:58.463 "max_cntlid": 65519, 00:11:58.463 "namespaces": [ 00:11:58.463 { 00:11:58.463 "nsid": 1, 00:11:58.463 "bdev_name": "Malloc1", 00:11:58.463 "name": "Malloc1", 00:11:58.463 "nguid": "9D0BB68AC942417688571222067BA629", 00:11:58.463 "uuid": "9d0bb68a-c942-4176-8857-1222067ba629" 00:11:58.463 } 00:11:58.463 ] 00:11:58.463 }, 00:11:58.463 { 00:11:58.463 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:58.463 "subtype": "NVMe", 00:11:58.463 "listen_addresses": [ 00:11:58.463 { 00:11:58.463 "trtype": "VFIOUSER", 00:11:58.463 "adrfam": "IPv4", 00:11:58.463 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:58.463 "trsvcid": "0" 00:11:58.463 } 00:11:58.463 ], 00:11:58.463 "allow_any_host": true, 00:11:58.463 "hosts": [], 00:11:58.463 "serial_number": "SPDK2", 00:11:58.463 "model_number": "SPDK bdev Controller", 00:11:58.463 "max_namespaces": 32, 00:11:58.463 "min_cntlid": 1, 00:11:58.463 "max_cntlid": 65519, 00:11:58.463 "namespaces": [ 00:11:58.463 { 00:11:58.463 "nsid": 1, 00:11:58.463 "bdev_name": "Malloc2", 00:11:58.463 "name": "Malloc2", 00:11:58.463 "nguid": "EBEC905C162A44A294A738986E4B9F04", 00:11:58.463 "uuid": "ebec905c-162a-44a2-94a7-38986e4b9f04" 00:11:58.463 } 00:11:58.463 ] 00:11:58.463 } 00:11:58.463 ] 00:11:58.463 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:58.463 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=932091 00:11:58.463 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 
-- # waitforfile /tmp/aer_touch_file 00:11:58.463 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:58.463 23:37:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1259 -- # local i=0 00:11:58.463 23:37:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:58.463 23:37:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:58.463 23:37:47 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # return 0 00:11:58.463 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:58.463 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:58.722 [2024-07-15 23:37:47.454687] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:58.722 Malloc3 00:11:58.722 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:58.722 [2024-07-15 23:37:47.689435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:58.981 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:58.981 Asynchronous Event Request test 00:11:58.981 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.981 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:58.981 Registering asynchronous event callbacks... 
00:11:58.981 Starting namespace attribute notice tests for all controllers... 00:11:58.981 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:58.981 aer_cb - Changed Namespace 00:11:58.981 Cleaning up... 00:11:58.981 [ 00:11:58.981 { 00:11:58.981 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:58.981 "subtype": "Discovery", 00:11:58.981 "listen_addresses": [], 00:11:58.981 "allow_any_host": true, 00:11:58.981 "hosts": [] 00:11:58.981 }, 00:11:58.981 { 00:11:58.981 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:58.981 "subtype": "NVMe", 00:11:58.981 "listen_addresses": [ 00:11:58.981 { 00:11:58.981 "trtype": "VFIOUSER", 00:11:58.981 "adrfam": "IPv4", 00:11:58.981 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:58.981 "trsvcid": "0" 00:11:58.981 } 00:11:58.981 ], 00:11:58.981 "allow_any_host": true, 00:11:58.981 "hosts": [], 00:11:58.981 "serial_number": "SPDK1", 00:11:58.981 "model_number": "SPDK bdev Controller", 00:11:58.981 "max_namespaces": 32, 00:11:58.981 "min_cntlid": 1, 00:11:58.981 "max_cntlid": 65519, 00:11:58.981 "namespaces": [ 00:11:58.981 { 00:11:58.981 "nsid": 1, 00:11:58.981 "bdev_name": "Malloc1", 00:11:58.981 "name": "Malloc1", 00:11:58.981 "nguid": "9D0BB68AC942417688571222067BA629", 00:11:58.981 "uuid": "9d0bb68a-c942-4176-8857-1222067ba629" 00:11:58.981 }, 00:11:58.981 { 00:11:58.981 "nsid": 2, 00:11:58.981 "bdev_name": "Malloc3", 00:11:58.981 "name": "Malloc3", 00:11:58.981 "nguid": "C1A95CBF07614927A13B8C9D6474E069", 00:11:58.981 "uuid": "c1a95cbf-0761-4927-a13b-8c9d6474e069" 00:11:58.981 } 00:11:58.981 ] 00:11:58.981 }, 00:11:58.981 { 00:11:58.981 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:58.981 "subtype": "NVMe", 00:11:58.981 "listen_addresses": [ 00:11:58.981 { 00:11:58.981 "trtype": "VFIOUSER", 00:11:58.981 "adrfam": "IPv4", 00:11:58.981 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:58.981 "trsvcid": "0" 00:11:58.981 } 00:11:58.981 ], 00:11:58.981 
"allow_any_host": true, 00:11:58.981 "hosts": [], 00:11:58.981 "serial_number": "SPDK2", 00:11:58.981 "model_number": "SPDK bdev Controller", 00:11:58.981 "max_namespaces": 32, 00:11:58.981 "min_cntlid": 1, 00:11:58.981 "max_cntlid": 65519, 00:11:58.981 "namespaces": [ 00:11:58.981 { 00:11:58.981 "nsid": 1, 00:11:58.981 "bdev_name": "Malloc2", 00:11:58.981 "name": "Malloc2", 00:11:58.981 "nguid": "EBEC905C162A44A294A738986E4B9F04", 00:11:58.981 "uuid": "ebec905c-162a-44a2-94a7-38986e4b9f04" 00:11:58.981 } 00:11:58.981 ] 00:11:58.981 } 00:11:58.981 ] 00:11:58.981 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 932091 00:11:58.981 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:58.981 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:58.981 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:58.981 23:37:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:58.981 [2024-07-15 23:37:47.918555] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:11:58.981 [2024-07-15 23:37:47.918602] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid932254 ] 00:11:58.981 [2024-07-15 23:37:47.946624] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:59.242 [2024-07-15 23:37:47.957128] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:59.242 [2024-07-15 23:37:47.957151] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f96280d7000 00:11:59.242 [2024-07-15 23:37:47.958138] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.242 [2024-07-15 23:37:47.959142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.242 [2024-07-15 23:37:47.960152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.242 [2024-07-15 23:37:47.961160] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:59.242 [2024-07-15 23:37:47.962170] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:59.242 [2024-07-15 23:37:47.963172] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.242 [2024-07-15 23:37:47.964181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:59.242 
[2024-07-15 23:37:47.965191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.242 [2024-07-15 23:37:47.966199] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:59.242 [2024-07-15 23:37:47.966209] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f96280cc000 00:11:59.242 [2024-07-15 23:37:47.967149] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:59.242 [2024-07-15 23:37:47.979680] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:59.242 [2024-07-15 23:37:47.979702] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:59.242 [2024-07-15 23:37:47.984789] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:59.242 [2024-07-15 23:37:47.984830] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:59.242 [2024-07-15 23:37:47.984899] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:11:59.242 [2024-07-15 23:37:47.984915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:59.242 [2024-07-15 23:37:47.984920] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:59.242 [2024-07-15 23:37:47.985792] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:59.242 [2024-07-15 23:37:47.985802] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:59.242 [2024-07-15 23:37:47.985809] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:59.242 [2024-07-15 23:37:47.986796] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:59.242 [2024-07-15 23:37:47.986805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:59.242 [2024-07-15 23:37:47.986811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:59.242 [2024-07-15 23:37:47.987808] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:59.242 [2024-07-15 23:37:47.987817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:59.242 [2024-07-15 23:37:47.988814] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:59.242 [2024-07-15 23:37:47.988823] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:59.242 [2024-07-15 23:37:47.988827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:59.242 [2024-07-15 23:37:47.988833] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:59.242 [2024-07-15 23:37:47.988938] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:59.242 [2024-07-15 23:37:47.988942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:59.242 [2024-07-15 23:37:47.988947] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:59.242 [2024-07-15 23:37:47.989818] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:59.242 [2024-07-15 23:37:47.990828] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:59.242 [2024-07-15 23:37:47.991832] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:59.242 [2024-07-15 23:37:47.992838] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:59.242 [2024-07-15 23:37:47.992875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:59.242 [2024-07-15 23:37:47.993849] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:59.242 [2024-07-15 23:37:47.993857] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:59.242 [2024-07-15 23:37:47.993862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:47.993879] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:59.242 [2024-07-15 23:37:47.993886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:47.993897] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:59.242 [2024-07-15 23:37:47.993901] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.242 [2024-07-15 23:37:47.993911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.000119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.000134] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:59.242 [2024-07-15 23:37:48.000141] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:59.242 [2024-07-15 23:37:48.000186] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:59.242 [2024-07-15 23:37:48.000191] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:59.242 [2024-07-15 23:37:48.000195] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:59.242 [2024-07-15 23:37:48.000199] 
nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:59.242 [2024-07-15 23:37:48.000203] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.000211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.000220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.008232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.008246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.242 [2024-07-15 23:37:48.008253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.242 [2024-07-15 23:37:48.008261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.242 [2024-07-15 23:37:48.008268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.242 [2024-07-15 23:37:48.008273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.008280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.008288] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.016231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.016239] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:59.242 [2024-07-15 23:37:48.016243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.016249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.016254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.016262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.024234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.024286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.024295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.024302] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:59.242 [2024-07-15 23:37:48.024306] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:59.242 [2024-07-15 23:37:48.024313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.032233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.032245] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:59.242 [2024-07-15 23:37:48.032253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.032260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.032266] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:59.242 [2024-07-15 23:37:48.032270] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.242 [2024-07-15 23:37:48.032276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.040230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.040244] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.040251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 
30000 ms) 00:11:59.242 [2024-07-15 23:37:48.040258] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:59.242 [2024-07-15 23:37:48.040262] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.242 [2024-07-15 23:37:48.040268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.048232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.048241] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.048247] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.048254] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.048260] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.048264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.048268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.048277] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:59.242 
[2024-07-15 23:37:48.048281] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:59.242 [2024-07-15 23:37:48.048286] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:59.242 [2024-07-15 23:37:48.048301] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.056231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.056243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.064258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.064271] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.072229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.072241] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:59.242 [2024-07-15 23:37:48.080231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:59.242 [2024-07-15 23:37:48.080247] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:59.242 [2024-07-15 23:37:48.080251] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:59.242 [2024-07-15 23:37:48.080254] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:59.242 [2024-07-15 23:37:48.080257] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:59.242 [2024-07-15 23:37:48.080263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:59.242 [2024-07-15 23:37:48.080270] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:59.243 [2024-07-15 23:37:48.080274] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:59.243 [2024-07-15 23:37:48.080279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:59.243 [2024-07-15 23:37:48.080286] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:59.243 [2024-07-15 23:37:48.080289] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.243 [2024-07-15 23:37:48.080295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.243 [2024-07-15 23:37:48.080301] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:59.243 [2024-07-15 23:37:48.080305] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:59.243 [2024-07-15 23:37:48.080311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:59.243 [2024-07-15 23:37:48.088232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:11:59.243 [2024-07-15 23:37:48.088245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:59.243 [2024-07-15 23:37:48.088256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:59.243 [2024-07-15 23:37:48.088262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:59.243 ===================================================== 00:11:59.243 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:59.243 ===================================================== 00:11:59.243 Controller Capabilities/Features 00:11:59.243 ================================ 00:11:59.243 Vendor ID: 4e58 00:11:59.243 Subsystem Vendor ID: 4e58 00:11:59.243 Serial Number: SPDK2 00:11:59.243 Model Number: SPDK bdev Controller 00:11:59.243 Firmware Version: 24.09 00:11:59.243 Recommended Arb Burst: 6 00:11:59.243 IEEE OUI Identifier: 8d 6b 50 00:11:59.243 Multi-path I/O 00:11:59.243 May have multiple subsystem ports: Yes 00:11:59.243 May have multiple controllers: Yes 00:11:59.243 Associated with SR-IOV VF: No 00:11:59.243 Max Data Transfer Size: 131072 00:11:59.243 Max Number of Namespaces: 32 00:11:59.243 Max Number of I/O Queues: 127 00:11:59.243 NVMe Specification Version (VS): 1.3 00:11:59.243 NVMe Specification Version (Identify): 1.3 00:11:59.243 Maximum Queue Entries: 256 00:11:59.243 Contiguous Queues Required: Yes 00:11:59.243 Arbitration Mechanisms Supported 00:11:59.243 Weighted Round Robin: Not Supported 00:11:59.243 Vendor Specific: Not Supported 00:11:59.243 Reset Timeout: 15000 ms 00:11:59.243 Doorbell Stride: 4 bytes 00:11:59.243 NVM Subsystem Reset: Not Supported 00:11:59.243 Command Sets Supported 00:11:59.243 NVM Command Set: Supported 00:11:59.243 Boot Partition: Not Supported 00:11:59.243 Memory 
Page Size Minimum: 4096 bytes 00:11:59.243 Memory Page Size Maximum: 4096 bytes 00:11:59.243 Persistent Memory Region: Not Supported 00:11:59.243 Optional Asynchronous Events Supported 00:11:59.243 Namespace Attribute Notices: Supported 00:11:59.243 Firmware Activation Notices: Not Supported 00:11:59.243 ANA Change Notices: Not Supported 00:11:59.243 PLE Aggregate Log Change Notices: Not Supported 00:11:59.243 LBA Status Info Alert Notices: Not Supported 00:11:59.243 EGE Aggregate Log Change Notices: Not Supported 00:11:59.243 Normal NVM Subsystem Shutdown event: Not Supported 00:11:59.243 Zone Descriptor Change Notices: Not Supported 00:11:59.243 Discovery Log Change Notices: Not Supported 00:11:59.243 Controller Attributes 00:11:59.243 128-bit Host Identifier: Supported 00:11:59.243 Non-Operational Permissive Mode: Not Supported 00:11:59.243 NVM Sets: Not Supported 00:11:59.243 Read Recovery Levels: Not Supported 00:11:59.243 Endurance Groups: Not Supported 00:11:59.243 Predictable Latency Mode: Not Supported 00:11:59.243 Traffic Based Keep ALive: Not Supported 00:11:59.243 Namespace Granularity: Not Supported 00:11:59.243 SQ Associations: Not Supported 00:11:59.243 UUID List: Not Supported 00:11:59.243 Multi-Domain Subsystem: Not Supported 00:11:59.243 Fixed Capacity Management: Not Supported 00:11:59.243 Variable Capacity Management: Not Supported 00:11:59.243 Delete Endurance Group: Not Supported 00:11:59.243 Delete NVM Set: Not Supported 00:11:59.243 Extended LBA Formats Supported: Not Supported 00:11:59.243 Flexible Data Placement Supported: Not Supported 00:11:59.243 00:11:59.243 Controller Memory Buffer Support 00:11:59.243 ================================ 00:11:59.243 Supported: No 00:11:59.243 00:11:59.243 Persistent Memory Region Support 00:11:59.243 ================================ 00:11:59.243 Supported: No 00:11:59.243 00:11:59.243 Admin Command Set Attributes 00:11:59.243 ============================ 00:11:59.243 Security Send/Receive: Not Supported 
00:11:59.243 Format NVM: Not Supported 00:11:59.243 Firmware Activate/Download: Not Supported 00:11:59.243 Namespace Management: Not Supported 00:11:59.243 Device Self-Test: Not Supported 00:11:59.243 Directives: Not Supported 00:11:59.243 NVMe-MI: Not Supported 00:11:59.243 Virtualization Management: Not Supported 00:11:59.243 Doorbell Buffer Config: Not Supported 00:11:59.243 Get LBA Status Capability: Not Supported 00:11:59.243 Command & Feature Lockdown Capability: Not Supported 00:11:59.243 Abort Command Limit: 4 00:11:59.243 Async Event Request Limit: 4 00:11:59.243 Number of Firmware Slots: N/A 00:11:59.243 Firmware Slot 1 Read-Only: N/A 00:11:59.243 Firmware Activation Without Reset: N/A 00:11:59.243 Multiple Update Detection Support: N/A 00:11:59.243 Firmware Update Granularity: No Information Provided 00:11:59.243 Per-Namespace SMART Log: No 00:11:59.243 Asymmetric Namespace Access Log Page: Not Supported 00:11:59.243 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:59.243 Command Effects Log Page: Supported 00:11:59.243 Get Log Page Extended Data: Supported 00:11:59.243 Telemetry Log Pages: Not Supported 00:11:59.243 Persistent Event Log Pages: Not Supported 00:11:59.243 Supported Log Pages Log Page: May Support 00:11:59.243 Commands Supported & Effects Log Page: Not Supported 00:11:59.243 Feature Identifiers & Effects Log Page:May Support 00:11:59.243 NVMe-MI Commands & Effects Log Page: May Support 00:11:59.243 Data Area 4 for Telemetry Log: Not Supported 00:11:59.243 Error Log Page Entries Supported: 128 00:11:59.243 Keep Alive: Supported 00:11:59.243 Keep Alive Granularity: 10000 ms 00:11:59.243 00:11:59.243 NVM Command Set Attributes 00:11:59.243 ========================== 00:11:59.243 Submission Queue Entry Size 00:11:59.243 Max: 64 00:11:59.243 Min: 64 00:11:59.243 Completion Queue Entry Size 00:11:59.243 Max: 16 00:11:59.243 Min: 16 00:11:59.243 Number of Namespaces: 32 00:11:59.243 Compare Command: Supported 00:11:59.243 Write Uncorrectable 
Command: Not Supported 00:11:59.243 Dataset Management Command: Supported 00:11:59.243 Write Zeroes Command: Supported 00:11:59.243 Set Features Save Field: Not Supported 00:11:59.243 Reservations: Not Supported 00:11:59.243 Timestamp: Not Supported 00:11:59.243 Copy: Supported 00:11:59.243 Volatile Write Cache: Present 00:11:59.243 Atomic Write Unit (Normal): 1 00:11:59.243 Atomic Write Unit (PFail): 1 00:11:59.243 Atomic Compare & Write Unit: 1 00:11:59.243 Fused Compare & Write: Supported 00:11:59.243 Scatter-Gather List 00:11:59.243 SGL Command Set: Supported (Dword aligned) 00:11:59.243 SGL Keyed: Not Supported 00:11:59.243 SGL Bit Bucket Descriptor: Not Supported 00:11:59.243 SGL Metadata Pointer: Not Supported 00:11:59.243 Oversized SGL: Not Supported 00:11:59.243 SGL Metadata Address: Not Supported 00:11:59.243 SGL Offset: Not Supported 00:11:59.243 Transport SGL Data Block: Not Supported 00:11:59.243 Replay Protected Memory Block: Not Supported 00:11:59.243 00:11:59.243 Firmware Slot Information 00:11:59.243 ========================= 00:11:59.243 Active slot: 1 00:11:59.243 Slot 1 Firmware Revision: 24.09 00:11:59.243 00:11:59.243 00:11:59.243 Commands Supported and Effects 00:11:59.243 ============================== 00:11:59.243 Admin Commands 00:11:59.243 -------------- 00:11:59.243 Get Log Page (02h): Supported 00:11:59.243 Identify (06h): Supported 00:11:59.243 Abort (08h): Supported 00:11:59.243 Set Features (09h): Supported 00:11:59.243 Get Features (0Ah): Supported 00:11:59.243 Asynchronous Event Request (0Ch): Supported 00:11:59.243 Keep Alive (18h): Supported 00:11:59.243 I/O Commands 00:11:59.243 ------------ 00:11:59.243 Flush (00h): Supported LBA-Change 00:11:59.243 Write (01h): Supported LBA-Change 00:11:59.243 Read (02h): Supported 00:11:59.243 Compare (05h): Supported 00:11:59.243 Write Zeroes (08h): Supported LBA-Change 00:11:59.243 Dataset Management (09h): Supported LBA-Change 00:11:59.243 Copy (19h): Supported LBA-Change 00:11:59.243 
00:11:59.243 Error Log 00:11:59.243 ========= 00:11:59.243 00:11:59.243 Arbitration 00:11:59.243 =========== 00:11:59.243 Arbitration Burst: 1 00:11:59.243 00:11:59.243 Power Management 00:11:59.243 ================ 00:11:59.243 Number of Power States: 1 00:11:59.243 Current Power State: Power State #0 00:11:59.243 Power State #0: 00:11:59.243 Max Power: 0.00 W 00:11:59.243 Non-Operational State: Operational 00:11:59.243 Entry Latency: Not Reported 00:11:59.243 Exit Latency: Not Reported 00:11:59.243 Relative Read Throughput: 0 00:11:59.243 Relative Read Latency: 0 00:11:59.243 Relative Write Throughput: 0 00:11:59.243 Relative Write Latency: 0 00:11:59.243 Idle Power: Not Reported 00:11:59.243 Active Power: Not Reported 00:11:59.243 Non-Operational Permissive Mode: Not Supported 00:11:59.243 00:11:59.243 Health Information 00:11:59.243 ================== 00:11:59.243 Critical Warnings: 00:11:59.243 Available Spare Space: OK 00:11:59.243 Temperature: OK 00:11:59.243 Device Reliability: OK 00:11:59.243 Read Only: No 00:11:59.243 Volatile Memory Backup: OK 00:11:59.243 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:59.243 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:59.243 Available Spare: 0% 00:11:59.243 Available Sp[2024-07-15 23:37:48.088343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:59.243 [2024-07-15 23:37:48.096230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:59.243 [2024-07-15 23:37:48.096261] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:59.243 [2024-07-15 23:37:48.096269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.243 [2024-07-15 23:37:48.096275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.243 [2024-07-15 23:37:48.096280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.243 [2024-07-15 23:37:48.096285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.243 [2024-07-15 23:37:48.096326] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:59.243 [2024-07-15 23:37:48.096337] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:59.243 [2024-07-15 23:37:48.097327] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:59.243 [2024-07-15 23:37:48.097368] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:59.243 [2024-07-15 23:37:48.097374] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:59.243 [2024-07-15 23:37:48.098334] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:59.243 [2024-07-15 23:37:48.098345] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:11:59.243 [2024-07-15 23:37:48.098390] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:59.243 [2024-07-15 23:37:48.099447] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:59.243 are Threshold: 0% 00:11:59.243 Life Percentage Used: 0% 00:11:59.243 Data 
Units Read: 0 00:11:59.243 Data Units Written: 0 00:11:59.243 Host Read Commands: 0 00:11:59.243 Host Write Commands: 0 00:11:59.243 Controller Busy Time: 0 minutes 00:11:59.243 Power Cycles: 0 00:11:59.243 Power On Hours: 0 hours 00:11:59.243 Unsafe Shutdowns: 0 00:11:59.243 Unrecoverable Media Errors: 0 00:11:59.243 Lifetime Error Log Entries: 0 00:11:59.243 Warning Temperature Time: 0 minutes 00:11:59.243 Critical Temperature Time: 0 minutes 00:11:59.243 00:11:59.243 Number of Queues 00:11:59.243 ================ 00:11:59.243 Number of I/O Submission Queues: 127 00:11:59.243 Number of I/O Completion Queues: 127 00:11:59.243 00:11:59.243 Active Namespaces 00:11:59.243 ================= 00:11:59.243 Namespace ID:1 00:11:59.243 Error Recovery Timeout: Unlimited 00:11:59.243 Command Set Identifier: NVM (00h) 00:11:59.243 Deallocate: Supported 00:11:59.243 Deallocated/Unwritten Error: Not Supported 00:11:59.243 Deallocated Read Value: Unknown 00:11:59.243 Deallocate in Write Zeroes: Not Supported 00:11:59.243 Deallocated Guard Field: 0xFFFF 00:11:59.243 Flush: Supported 00:11:59.243 Reservation: Supported 00:11:59.243 Namespace Sharing Capabilities: Multiple Controllers 00:11:59.243 Size (in LBAs): 131072 (0GiB) 00:11:59.243 Capacity (in LBAs): 131072 (0GiB) 00:11:59.243 Utilization (in LBAs): 131072 (0GiB) 00:11:59.243 NGUID: EBEC905C162A44A294A738986E4B9F04 00:11:59.243 UUID: ebec905c-162a-44a2-94a7-38986e4b9f04 00:11:59.243 Thin Provisioning: Not Supported 00:11:59.243 Per-NS Atomic Units: Yes 00:11:59.243 Atomic Boundary Size (Normal): 0 00:11:59.243 Atomic Boundary Size (PFail): 0 00:11:59.243 Atomic Boundary Offset: 0 00:11:59.243 Maximum Single Source Range Length: 65535 00:11:59.243 Maximum Copy Length: 65535 00:11:59.243 Maximum Source Range Count: 1 00:11:59.243 NGUID/EUI64 Never Reused: No 00:11:59.243 Namespace Write Protected: No 00:11:59.243 Number of LBA Formats: 1 00:11:59.243 Current LBA Format: LBA Format #00 00:11:59.243 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:11:59.243 00:11:59.243 23:37:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:59.502 [2024-07-15 23:37:48.321653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:04.778 Initializing NVMe Controllers 00:12:04.778 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:04.778 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:04.778 Initialization complete. Launching workers. 00:12:04.778 ======================================================== 00:12:04.778 Latency(us) 00:12:04.778 Device Information : IOPS MiB/s Average min max 00:12:04.778 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39833.76 155.60 3213.18 972.71 10603.45 00:12:04.778 ======================================================== 00:12:04.778 Total : 39833.76 155.60 3213.18 972.71 10603.45 00:12:04.778 00:12:04.778 [2024-07-15 23:37:53.431475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:04.778 23:37:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:04.778 [2024-07-15 23:37:53.657130] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:10.055 Initializing NVMe Controllers 00:12:10.055 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:10.055 
Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:10.055 Initialization complete. Launching workers. 00:12:10.055 ======================================================== 00:12:10.055 Latency(us) 00:12:10.055 Device Information : IOPS MiB/s Average min max 00:12:10.055 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39927.40 155.97 3205.41 971.85 7592.86 00:12:10.055 ======================================================== 00:12:10.055 Total : 39927.40 155.97 3205.41 971.85 7592.86 00:12:10.055 00:12:10.055 [2024-07-15 23:37:58.676276] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:10.055 23:37:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:10.055 [2024-07-15 23:37:58.858664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:15.366 [2024-07-15 23:38:04.007320] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:15.366 Initializing NVMe Controllers 00:12:15.366 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:15.366 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:15.366 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:15.366 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:15.367 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:15.367 Initialization complete. Launching workers. 
00:12:15.367 Starting thread on core 2 00:12:15.367 Starting thread on core 3 00:12:15.367 Starting thread on core 1 00:12:15.367 23:38:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:15.367 [2024-07-15 23:38:04.293686] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.651 [2024-07-15 23:38:07.353893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.651 Initializing NVMe Controllers 00:12:18.651 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.651 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.651 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:18.651 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:18.651 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:18.651 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:18.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:18.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:18.651 Initialization complete. Launching workers. 
00:12:18.651 Starting thread on core 1 with urgent priority queue 00:12:18.651 Starting thread on core 2 with urgent priority queue 00:12:18.651 Starting thread on core 3 with urgent priority queue 00:12:18.651 Starting thread on core 0 with urgent priority queue 00:12:18.651 SPDK bdev Controller (SPDK2 ) core 0: 10323.33 IO/s 9.69 secs/100000 ios 00:12:18.651 SPDK bdev Controller (SPDK2 ) core 1: 9087.33 IO/s 11.00 secs/100000 ios 00:12:18.651 SPDK bdev Controller (SPDK2 ) core 2: 7643.67 IO/s 13.08 secs/100000 ios 00:12:18.651 SPDK bdev Controller (SPDK2 ) core 3: 9605.33 IO/s 10.41 secs/100000 ios 00:12:18.651 ======================================================== 00:12:18.651 00:12:18.651 23:38:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:18.909 [2024-07-15 23:38:07.626658] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:18.909 Initializing NVMe Controllers 00:12:18.909 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.909 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:18.909 Namespace ID: 1 size: 0GB 00:12:18.909 Initialization complete. 00:12:18.909 INFO: using host memory buffer for IO 00:12:18.909 Hello world! 
00:12:18.909 [2024-07-15 23:38:07.636718] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:18.909 23:38:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:19.166 [2024-07-15 23:38:07.906535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:20.102 Initializing NVMe Controllers 00:12:20.102 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.102 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.102 Initialization complete. Launching workers. 00:12:20.102 submit (in ns) avg, min, max = 6428.3, 3240.9, 4006051.3 00:12:20.102 complete (in ns) avg, min, max = 19509.3, 1800.0, 4007013.0 00:12:20.102 00:12:20.102 Submit histogram 00:12:20.102 ================ 00:12:20.102 Range in us Cumulative Count 00:12:20.102 3.228 - 3.242: 0.0061% ( 1) 00:12:20.102 3.242 - 3.256: 0.0123% ( 1) 00:12:20.102 3.256 - 3.270: 0.0246% ( 2) 00:12:20.102 3.270 - 3.283: 0.0737% ( 8) 00:12:20.102 3.283 - 3.297: 0.4175% ( 56) 00:12:20.102 3.297 - 3.311: 2.0262% ( 262) 00:12:20.102 3.311 - 3.325: 4.7891% ( 450) 00:12:20.102 3.325 - 3.339: 8.0739% ( 535) 00:12:20.102 3.339 - 3.353: 12.2797% ( 685) 00:12:20.102 3.353 - 3.367: 17.6398% ( 873) 00:12:20.102 3.367 - 3.381: 22.5763% ( 804) 00:12:20.102 3.381 - 3.395: 28.2127% ( 918) 00:12:20.102 3.395 - 3.409: 33.6526% ( 886) 00:12:20.103 3.409 - 3.423: 38.7364% ( 828) 00:12:20.103 3.423 - 3.437: 43.2308% ( 732) 00:12:20.103 3.437 - 3.450: 48.0199% ( 780) 00:12:20.103 3.450 - 3.464: 54.0185% ( 977) 00:12:20.103 3.464 - 3.478: 58.8568% ( 788) 00:12:20.103 3.478 - 3.492: 63.3573% ( 733) 00:12:20.103 3.492 - 3.506: 68.3244% ( 809) 00:12:20.103 3.506 - 3.520: 73.5065% ( 844) 00:12:20.103 3.520 - 3.534: 
77.4606% ( 644) 00:12:20.103 3.534 - 3.548: 80.7884% ( 542) 00:12:20.103 3.548 - 3.562: 83.5636% ( 452) 00:12:20.103 3.562 - 3.590: 86.7563% ( 520) 00:12:20.103 3.590 - 3.617: 88.1746% ( 231) 00:12:20.103 3.617 - 3.645: 89.4763% ( 212) 00:12:20.103 3.645 - 3.673: 90.9376% ( 238) 00:12:20.103 3.673 - 3.701: 92.4111% ( 240) 00:12:20.103 3.701 - 3.729: 94.2408% ( 298) 00:12:20.103 3.729 - 3.757: 95.8249% ( 258) 00:12:20.103 3.757 - 3.784: 97.1695% ( 219) 00:12:20.103 3.784 - 3.812: 97.9984% ( 135) 00:12:20.103 3.812 - 3.840: 98.7229% ( 118) 00:12:20.103 3.840 - 3.868: 99.2080% ( 79) 00:12:20.103 3.868 - 3.896: 99.4720% ( 43) 00:12:20.103 3.896 - 3.923: 99.5702% ( 16) 00:12:20.103 3.923 - 3.951: 99.6009% ( 5) 00:12:20.103 3.951 - 3.979: 99.6132% ( 2) 00:12:20.103 3.979 - 4.007: 99.6193% ( 1) 00:12:20.103 4.118 - 4.146: 99.6255% ( 1) 00:12:20.103 5.176 - 5.203: 99.6316% ( 1) 00:12:20.103 5.231 - 5.259: 99.6377% ( 1) 00:12:20.103 5.315 - 5.343: 99.6500% ( 2) 00:12:20.103 5.343 - 5.370: 99.6562% ( 1) 00:12:20.103 5.482 - 5.510: 99.6684% ( 2) 00:12:20.103 5.510 - 5.537: 99.6746% ( 1) 00:12:20.103 5.983 - 6.010: 99.6869% ( 2) 00:12:20.103 6.317 - 6.344: 99.6991% ( 2) 00:12:20.103 6.428 - 6.456: 99.7053% ( 1) 00:12:20.103 6.483 - 6.511: 99.7114% ( 1) 00:12:20.103 6.650 - 6.678: 99.7237% ( 2) 00:12:20.103 6.845 - 6.873: 99.7298% ( 1) 00:12:20.103 6.901 - 6.929: 99.7360% ( 1) 00:12:20.103 6.957 - 6.984: 99.7483% ( 2) 00:12:20.103 7.012 - 7.040: 99.7544% ( 1) 00:12:20.103 7.123 - 7.179: 99.7605% ( 1) 00:12:20.103 7.235 - 7.290: 99.7790% ( 3) 00:12:20.103 7.346 - 7.402: 99.7912% ( 2) 00:12:20.103 7.402 - 7.457: 99.7974% ( 1) 00:12:20.103 7.457 - 7.513: 99.8035% ( 1) 00:12:20.103 7.513 - 7.569: 99.8219% ( 3) 00:12:20.103 7.624 - 7.680: 99.8281% ( 1) 00:12:20.103 7.680 - 7.736: 99.8342% ( 1) 00:12:20.103 7.736 - 7.791: 99.8404% ( 1) 00:12:20.103 7.903 - 7.958: 99.8465% ( 1) 00:12:20.103 8.070 - 8.125: 99.8526% ( 1) 00:12:20.103 8.125 - 8.181: 99.8588% ( 1) 00:12:20.103 8.181 - 
8.237: 99.8649% ( 1) 00:12:20.103 8.237 - 8.292: 99.8772% ( 2) 00:12:20.103 8.459 - 8.515: 99.8833% ( 1) 00:12:20.103 8.515 - 8.570: 99.8895% ( 1) 00:12:20.103 8.737 - 8.793: 99.9018% ( 2) 00:12:20.103 8.849 - 8.904: 99.9079% ( 1) 00:12:20.103 9.016 - 9.071: 99.9140% ( 1) 00:12:20.103 9.461 - 9.517: 99.9202% ( 1) 00:12:20.103 9.683 - 9.739: 99.9263% ( 1) 00:12:20.103 3989.148 - 4017.642: 100.0000% ( 12) 00:12:20.103 00:12:20.103 Complete histogram 00:12:20.103 ================== 00:12:20.103 Ra[2024-07-15 23:38:09.001418] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:20.103 nge in us Cumulative Count 00:12:20.103 1.795 - 1.809: 0.0061% ( 1) 00:12:20.103 1.809 - 1.823: 0.0307% ( 4) 00:12:20.103 1.823 - 1.837: 0.5342% ( 82) 00:12:20.103 1.837 - 1.850: 1.8542% ( 215) 00:12:20.103 1.850 - 1.864: 2.9717% ( 182) 00:12:20.103 1.864 - 1.878: 6.5267% ( 579) 00:12:20.103 1.878 - 1.892: 49.3215% ( 6970) 00:12:20.103 1.892 - 1.906: 87.9781% ( 6296) 00:12:20.103 1.906 - 1.920: 94.3329% ( 1035) 00:12:20.103 1.920 - 1.934: 95.7819% ( 236) 00:12:20.103 1.934 - 1.948: 96.3345% ( 90) 00:12:20.103 1.948 - 1.962: 97.1450% ( 132) 00:12:20.103 1.962 - 1.976: 98.3729% ( 200) 00:12:20.103 1.976 - 1.990: 99.0238% ( 106) 00:12:20.103 1.990 - 2.003: 99.1711% ( 24) 00:12:20.103 2.003 - 2.017: 99.2018% ( 5) 00:12:20.103 2.017 - 2.031: 99.2387% ( 6) 00:12:20.103 2.031 - 2.045: 99.2448% ( 1) 00:12:20.103 2.045 - 2.059: 99.2755% ( 5) 00:12:20.103 2.059 - 2.073: 99.2939% ( 3) 00:12:20.103 2.073 - 2.087: 99.3001% ( 1) 00:12:20.103 2.087 - 2.101: 99.3062% ( 1) 00:12:20.103 2.101 - 2.115: 99.3123% ( 1) 00:12:20.103 2.115 - 2.129: 99.3185% ( 1) 00:12:20.103 2.129 - 2.143: 99.3246% ( 1) 00:12:20.103 2.170 - 2.184: 99.3308% ( 1) 00:12:20.103 2.184 - 2.198: 99.3369% ( 1) 00:12:20.103 2.212 - 2.226: 99.3430% ( 1) 00:12:20.103 2.254 - 2.268: 99.3492% ( 1) 00:12:20.103 3.757 - 3.784: 99.3553% ( 1) 00:12:20.103 3.812 - 3.840: 99.3615% ( 1) 
00:12:20.103 3.840 - 3.868: 99.3676% ( 1) 00:12:20.103 3.951 - 3.979: 99.3737% ( 1) 00:12:20.103 3.979 - 4.007: 99.3799% ( 1) 00:12:20.103 4.035 - 4.063: 99.3860% ( 1) 00:12:20.103 4.118 - 4.146: 99.3922% ( 1) 00:12:20.103 4.230 - 4.257: 99.3983% ( 1) 00:12:20.103 4.341 - 4.369: 99.4044% ( 1) 00:12:20.103 4.397 - 4.424: 99.4106% ( 1) 00:12:20.103 4.508 - 4.536: 99.4167% ( 1) 00:12:20.103 5.009 - 5.037: 99.4290% ( 2) 00:12:20.103 5.037 - 5.064: 99.4351% ( 1) 00:12:20.103 5.148 - 5.176: 99.4413% ( 1) 00:12:20.103 5.176 - 5.203: 99.4474% ( 1) 00:12:20.103 5.231 - 5.259: 99.4536% ( 1) 00:12:20.103 5.370 - 5.398: 99.4597% ( 1) 00:12:20.103 5.565 - 5.593: 99.4658% ( 1) 00:12:20.103 5.760 - 5.788: 99.4720% ( 1) 00:12:20.103 5.788 - 5.816: 99.4781% ( 1) 00:12:20.103 5.871 - 5.899: 99.4843% ( 1) 00:12:20.103 5.955 - 5.983: 99.4904% ( 1) 00:12:20.103 6.010 - 6.038: 99.4965% ( 1) 00:12:20.103 6.539 - 6.567: 99.5027% ( 1) 00:12:20.103 6.706 - 6.734: 99.5088% ( 1) 00:12:20.103 6.845 - 6.873: 99.5150% ( 1) 00:12:20.103 7.235 - 7.290: 99.5272% ( 2) 00:12:20.103 7.346 - 7.402: 99.5334% ( 1) 00:12:20.103 7.624 - 7.680: 99.5395% ( 1) 00:12:20.103 7.847 - 7.903: 99.5456% ( 1) 00:12:20.103 8.237 - 8.292: 99.5518% ( 1) 00:12:20.103 8.403 - 8.459: 99.5579% ( 1) 00:12:20.103 3077.343 - 3091.590: 99.5641% ( 1) 00:12:20.103 3989.148 - 4017.642: 100.0000% ( 71) 00:12:20.103 00:12:20.103 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:20.103 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:20.103 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:20.103 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:20.103 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:20.362 [ 00:12:20.362 { 00:12:20.362 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.362 "subtype": "Discovery", 00:12:20.362 "listen_addresses": [], 00:12:20.362 "allow_any_host": true, 00:12:20.362 "hosts": [] 00:12:20.362 }, 00:12:20.362 { 00:12:20.362 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:20.362 "subtype": "NVMe", 00:12:20.362 "listen_addresses": [ 00:12:20.362 { 00:12:20.362 "trtype": "VFIOUSER", 00:12:20.362 "adrfam": "IPv4", 00:12:20.362 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:20.362 "trsvcid": "0" 00:12:20.362 } 00:12:20.362 ], 00:12:20.362 "allow_any_host": true, 00:12:20.362 "hosts": [], 00:12:20.363 "serial_number": "SPDK1", 00:12:20.363 "model_number": "SPDK bdev Controller", 00:12:20.363 "max_namespaces": 32, 00:12:20.363 "min_cntlid": 1, 00:12:20.363 "max_cntlid": 65519, 00:12:20.363 "namespaces": [ 00:12:20.363 { 00:12:20.363 "nsid": 1, 00:12:20.363 "bdev_name": "Malloc1", 00:12:20.363 "name": "Malloc1", 00:12:20.363 "nguid": "9D0BB68AC942417688571222067BA629", 00:12:20.363 "uuid": "9d0bb68a-c942-4176-8857-1222067ba629" 00:12:20.363 }, 00:12:20.363 { 00:12:20.363 "nsid": 2, 00:12:20.363 "bdev_name": "Malloc3", 00:12:20.363 "name": "Malloc3", 00:12:20.363 "nguid": "C1A95CBF07614927A13B8C9D6474E069", 00:12:20.363 "uuid": "c1a95cbf-0761-4927-a13b-8c9d6474e069" 00:12:20.363 } 00:12:20.363 ] 00:12:20.363 }, 00:12:20.363 { 00:12:20.363 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:20.363 "subtype": "NVMe", 00:12:20.363 "listen_addresses": [ 00:12:20.363 { 00:12:20.363 "trtype": "VFIOUSER", 00:12:20.363 "adrfam": "IPv4", 00:12:20.363 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:20.363 "trsvcid": "0" 00:12:20.363 } 00:12:20.363 ], 00:12:20.363 "allow_any_host": true, 00:12:20.363 "hosts": [], 00:12:20.363 "serial_number": "SPDK2", 00:12:20.363 "model_number": "SPDK bdev Controller", 00:12:20.363 "max_namespaces": 32, 
00:12:20.363 "min_cntlid": 1, 00:12:20.363 "max_cntlid": 65519, 00:12:20.363 "namespaces": [ 00:12:20.363 { 00:12:20.363 "nsid": 1, 00:12:20.363 "bdev_name": "Malloc2", 00:12:20.363 "name": "Malloc2", 00:12:20.363 "nguid": "EBEC905C162A44A294A738986E4B9F04", 00:12:20.363 "uuid": "ebec905c-162a-44a2-94a7-38986e4b9f04" 00:12:20.363 } 00:12:20.363 ] 00:12:20.363 } 00:12:20.363 ] 00:12:20.363 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:20.363 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:20.363 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=935716 00:12:20.363 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:20.363 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1259 -- # local i=0 00:12:20.363 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:20.363 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:20.363 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # return 0 00:12:20.363 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:20.363 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:20.621 [2024-07-15 23:38:09.363902] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:20.621 Malloc4 00:12:20.621 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:20.878 [2024-07-15 23:38:09.600674] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:20.878 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:20.878 Asynchronous Event Request test 00:12:20.878 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.878 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:20.878 Registering asynchronous event callbacks... 00:12:20.878 Starting namespace attribute notice tests for all controllers... 00:12:20.878 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:20.878 aer_cb - Changed Namespace 00:12:20.878 Cleaning up... 
00:12:20.878 [ 00:12:20.878 { 00:12:20.878 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.878 "subtype": "Discovery", 00:12:20.878 "listen_addresses": [], 00:12:20.878 "allow_any_host": true, 00:12:20.878 "hosts": [] 00:12:20.878 }, 00:12:20.878 { 00:12:20.878 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:20.878 "subtype": "NVMe", 00:12:20.878 "listen_addresses": [ 00:12:20.878 { 00:12:20.878 "trtype": "VFIOUSER", 00:12:20.878 "adrfam": "IPv4", 00:12:20.878 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:20.878 "trsvcid": "0" 00:12:20.878 } 00:12:20.878 ], 00:12:20.879 "allow_any_host": true, 00:12:20.879 "hosts": [], 00:12:20.879 "serial_number": "SPDK1", 00:12:20.879 "model_number": "SPDK bdev Controller", 00:12:20.879 "max_namespaces": 32, 00:12:20.879 "min_cntlid": 1, 00:12:20.879 "max_cntlid": 65519, 00:12:20.879 "namespaces": [ 00:12:20.879 { 00:12:20.879 "nsid": 1, 00:12:20.879 "bdev_name": "Malloc1", 00:12:20.879 "name": "Malloc1", 00:12:20.879 "nguid": "9D0BB68AC942417688571222067BA629", 00:12:20.879 "uuid": "9d0bb68a-c942-4176-8857-1222067ba629" 00:12:20.879 }, 00:12:20.879 { 00:12:20.879 "nsid": 2, 00:12:20.879 "bdev_name": "Malloc3", 00:12:20.879 "name": "Malloc3", 00:12:20.879 "nguid": "C1A95CBF07614927A13B8C9D6474E069", 00:12:20.879 "uuid": "c1a95cbf-0761-4927-a13b-8c9d6474e069" 00:12:20.879 } 00:12:20.879 ] 00:12:20.879 }, 00:12:20.879 { 00:12:20.879 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:20.879 "subtype": "NVMe", 00:12:20.879 "listen_addresses": [ 00:12:20.879 { 00:12:20.879 "trtype": "VFIOUSER", 00:12:20.879 "adrfam": "IPv4", 00:12:20.879 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:20.879 "trsvcid": "0" 00:12:20.879 } 00:12:20.879 ], 00:12:20.879 "allow_any_host": true, 00:12:20.879 "hosts": [], 00:12:20.879 "serial_number": "SPDK2", 00:12:20.879 "model_number": "SPDK bdev Controller", 00:12:20.879 "max_namespaces": 32, 00:12:20.879 "min_cntlid": 1, 00:12:20.879 "max_cntlid": 65519, 00:12:20.879 "namespaces": [ 
00:12:20.879 { 00:12:20.879 "nsid": 1, 00:12:20.879 "bdev_name": "Malloc2", 00:12:20.879 "name": "Malloc2", 00:12:20.879 "nguid": "EBEC905C162A44A294A738986E4B9F04", 00:12:20.879 "uuid": "ebec905c-162a-44a2-94a7-38986e4b9f04" 00:12:20.879 }, 00:12:20.879 { 00:12:20.879 "nsid": 2, 00:12:20.879 "bdev_name": "Malloc4", 00:12:20.879 "name": "Malloc4", 00:12:20.879 "nguid": "39898996272649B6B6E75D10FE9A56A7", 00:12:20.879 "uuid": "39898996-2726-49b6-b6e7-5d10fe9a56a7" 00:12:20.879 } 00:12:20.879 ] 00:12:20.879 } 00:12:20.879 ] 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 935716 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 928088 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@942 -- # '[' -z 928088 ']' 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # kill -0 928088 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # uname 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 928088 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@960 -- # echo 'killing process with pid 928088' 00:12:20.879 killing process with pid 928088 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@961 -- # kill 928088 00:12:20.879 23:38:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # wait 928088 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm 
-rf /var/run/vfio-user 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=935952 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 935952' 00:12:21.138 Process pid: 935952 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 935952 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@823 -- # '[' -z 935952 ']' 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
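The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the autotest `waitforlisten` helper, and the earlier `waitforfile /tmp/aer_touch_file` step works the same way: both just poll until a filesystem path appears. A minimal Python sketch of that polling idea (the `timeout` and `poll_interval` values are illustrative, not the script's actual settings):

```python
import os
import time

def wait_for_file(path, timeout=30.0, poll_interval=0.1):
    """Poll until `path` exists, in the spirit of the autotest
    waitforfile/waitforlisten helpers. Returns True once the path
    shows up, False if the timeout elapses first."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_interval)
    # One last check in case the file appeared right at the deadline.
    return os.path.exists(path)
```

In the log, the same pattern gates the AER test on `/tmp/aer_touch_file` (created by the `aer` tool via `-t`) and gates RPC calls on the target's `/var/tmp/spdk.sock` listener.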
00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:21.138 23:38:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:21.397 [2024-07-15 23:38:10.146555] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:21.397 [2024-07-15 23:38:10.147457] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:12:21.397 [2024-07-15 23:38:10.147494] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.397 [2024-07-15 23:38:10.204013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.397 [2024-07-15 23:38:10.283978] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.397 [2024-07-15 23:38:10.284015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.397 [2024-07-15 23:38:10.284023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.397 [2024-07-15 23:38:10.284029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.397 [2024-07-15 23:38:10.284033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.397 [2024-07-15 23:38:10.284092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.397 [2024-07-15 23:38:10.284109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.397 [2024-07-15 23:38:10.284202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.397 [2024-07-15 23:38:10.284203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.397 [2024-07-15 23:38:10.362933] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:21.397 [2024-07-15 23:38:10.363094] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:21.397 [2024-07-15 23:38:10.363326] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:21.397 [2024-07-15 23:38:10.363670] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:21.397 [2024-07-15 23:38:10.363921] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
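In the `nvmf_get_subsystems` listings earlier in the log, every namespace carries both an `nguid` and a `uuid`, and the NGUID shown is simply the UUID's hex digits uppercased with the dashes dropped. A small sketch of that relationship, checked against value pairs copied from the log output:

```python
import uuid

def nguid_from_uuid(u):
    """Render a namespace UUID in the compact uppercase NGUID form
    that the nvmf_get_subsystems output reports."""
    return uuid.UUID(u).hex.upper()

# uuid/nguid pairs copied verbatim from the subsystem listings above.
pairs = [
    ("ebec905c-162a-44a2-94a7-38986e4b9f04", "EBEC905C162A44A294A738986E4B9F04"),
    ("39898996-2726-49b6-b6e7-5d10fe9a56a7", "39898996272649B6B6E75D10FE9A56A7"),
]
for u, nguid in pairs:
    assert nguid_from_uuid(u) == nguid
```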
00:12:22.333 23:38:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:22.333 23:38:10 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # return 0 00:12:22.333 23:38:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:23.269 23:38:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:23.269 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:23.269 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:23.269 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:23.269 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:23.269 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:23.528 Malloc1 00:12:23.528 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:23.787 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:23.787 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:24.045 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:24.045 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:12:24.045 23:38:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:24.304 Malloc2 00:12:24.304 23:38:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:24.304 23:38:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:24.562 23:38:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 935952 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@942 -- # '[' -z 935952 ']' 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # kill -0 935952 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # uname 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 935952 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@960 -- # echo 'killing process with pid 935952' 00:12:24.821 killing 
process with pid 935952 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@961 -- # kill 935952 00:12:24.821 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # wait 935952 00:12:25.080 23:38:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:25.080 23:38:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:25.080 00:12:25.080 real 0m51.280s 00:12:25.080 user 3m22.885s 00:12:25.080 sys 0m3.615s 00:12:25.080 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:25.080 23:38:13 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 ************************************ 00:12:25.080 END TEST nvmf_vfio_user 00:12:25.080 ************************************ 00:12:25.080 23:38:13 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:25.080 23:38:13 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:25.080 23:38:13 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:25.080 23:38:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:25.080 23:38:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 ************************************ 00:12:25.080 START TEST nvmf_vfio_user_nvme_compliance 00:12:25.080 ************************************ 00:12:25.080 23:38:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:25.080 * Looking for test storage... 
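The `setup_nvmf_vfio_user` flow traced above repeats the same per-device steps inside `for i in $(seq 1 $NUM_DEVICES)`: make the vfio-user socket directory, create a malloc bdev, create a subsystem, attach the bdev as a namespace, and add a VFIOUSER listener. A hedged sketch that generates that command sequence (the `rpc` path and the 64 MiB / 512 B malloc sizing mirror the log; this is an illustration of the loop, not the script itself):

```python
def vfio_user_setup_cmds(num_devices=2,
                         rpc="scripts/rpc.py",
                         root="/var/run/vfio-user"):
    """Generate the RPC command sequence that nvmf_vfio_user.sh's
    per-device setup loop issues, as seen in the log above."""
    cmds = [f"{rpc} nvmf_create_transport -t VFIOUSER"]
    for i in range(1, num_devices + 1):
        traddr = f"{root}/domain/vfio-user{i}/{i}"
        nqn = f"nqn.2019-07.io.spdk:cnode{i}"
        cmds += [
            f"mkdir -p {traddr}",
            f"{rpc} bdev_malloc_create 64 512 -b Malloc{i}",
            f"{rpc} nvmf_create_subsystem {nqn} -a -s SPDK{i}",
            f"{rpc} nvmf_subsystem_add_ns {nqn} Malloc{i}",
            f"{rpc} nvmf_subsystem_add_listener {nqn} -t VFIOUSER -a {traddr} -s 0",
        ]
    return cmds
```

With `num_devices=2` this reproduces the cnode1/cnode2 layout the log's subsystem listings show, one transport-create plus five commands per device.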
00:12:25.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.080 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.081 23:38:14 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:25.081 23:38:14 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.081 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=936688 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 936688' 00:12:25.341 Process pid: 936688 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # 
waitforlisten 936688 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@823 -- # '[' -z 936688 ']' 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:25.341 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:25.341 [2024-07-15 23:38:14.108351] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:12:25.341 [2024-07-15 23:38:14.108421] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.341 [2024-07-15 23:38:14.162214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:25.341 [2024-07-15 23:38:14.241230] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.341 [2024-07-15 23:38:14.241264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:25.341 [2024-07-15 23:38:14.241271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.341 [2024-07-15 23:38:14.241277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.341 [2024-07-15 23:38:14.241283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.341 [2024-07-15 23:38:14.241335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.341 [2024-07-15 23:38:14.241435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.341 [2024-07-15 23:38:14.241437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.279 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:26.279 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # return 0 00:12:26.279 23:38:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # 
rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.216 malloc0 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:27.216 23:38:15 nvmf_tcp.nvmf_vfio_user_nvme_compliance 
-- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:27.216 00:12:27.216 00:12:27.216 CUnit - A unit testing framework for C - Version 2.1-3 00:12:27.216 http://cunit.sourceforge.net/ 00:12:27.216 00:12:27.216 00:12:27.216 Suite: nvme_compliance 00:12:27.216 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 23:38:16.127629] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.216 [2024-07-15 23:38:16.128962] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:27.216 [2024-07-15 23:38:16.128976] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:27.216 [2024-07-15 23:38:16.128981] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:27.216 [2024-07-15 23:38:16.131656] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.216 passed 00:12:27.476 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 23:38:16.211179] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.476 [2024-07-15 23:38:16.214196] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.476 passed 00:12:27.476 Test: admin_identify_ns ...[2024-07-15 23:38:16.292926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.476 [2024-07-15 23:38:16.353237] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:27.476 [2024-07-15 23:38:16.361233] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:27.476 [2024-07-15 23:38:16.382331] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.476 passed 00:12:27.735 Test: 
admin_get_features_mandatory_features ...[2024-07-15 23:38:16.457514] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.735 [2024-07-15 23:38:16.460534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.735 passed 00:12:27.735 Test: admin_get_features_optional_features ...[2024-07-15 23:38:16.539071] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.735 [2024-07-15 23:38:16.542088] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.735 passed 00:12:27.735 Test: admin_set_features_number_of_queues ...[2024-07-15 23:38:16.619863] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.995 [2024-07-15 23:38:16.725309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.995 passed 00:12:27.995 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 23:38:16.802318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.995 [2024-07-15 23:38:16.805339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:27.995 passed 00:12:27.995 Test: admin_get_log_page_with_lpo ...[2024-07-15 23:38:16.879173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:27.995 [2024-07-15 23:38:16.948244] ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:27.995 [2024-07-15 23:38:16.961303] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.254 passed 00:12:28.254 Test: fabric_property_get ...[2024-07-15 23:38:17.036424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.254 [2024-07-15 23:38:17.040442] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:28.254 [2024-07-15 
23:38:17.042454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.254 passed 00:12:28.254 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 23:38:17.118968] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.254 [2024-07-15 23:38:17.120219] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:28.254 [2024-07-15 23:38:17.125016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.254 passed 00:12:28.254 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 23:38:17.202866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.513 [2024-07-15 23:38:17.287235] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.513 [2024-07-15 23:38:17.303233] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:28.513 [2024-07-15 23:38:17.308319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.513 passed 00:12:28.513 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 23:38:17.381460] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.513 [2024-07-15 23:38:17.382698] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:28.513 [2024-07-15 23:38:17.384479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.513 passed 00:12:28.513 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 23:38:17.462243] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.773 [2024-07-15 23:38:17.540244] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:28.773 [2024-07-15 23:38:17.564232] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O 
sqid:1 does not exist 00:12:28.773 [2024-07-15 23:38:17.569306] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.773 passed 00:12:28.773 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 23:38:17.645388] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:28.773 [2024-07-15 23:38:17.646620] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:28.773 [2024-07-15 23:38:17.646644] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:28.773 [2024-07-15 23:38:17.648415] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:28.773 passed 00:12:28.773 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 23:38:17.726207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.032 [2024-07-15 23:38:17.819233] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:29.032 [2024-07-15 23:38:17.827229] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:29.032 [2024-07-15 23:38:17.835235] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:29.032 [2024-07-15 23:38:17.843234] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:29.032 [2024-07-15 23:38:17.872309] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.032 passed 00:12:29.032 Test: admin_create_io_sq_verify_pc ...[2024-07-15 23:38:17.946264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:29.032 [2024-07-15 23:38:17.966239] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:29.032 [2024-07-15 23:38:17.983498] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:29.329 
passed 00:12:29.329 Test: admin_create_io_qp_max_qps ...[2024-07-15 23:38:18.059983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.281 [2024-07-15 23:38:19.168255] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:30.850 [2024-07-15 23:38:19.549145] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.850 passed 00:12:30.850 Test: admin_create_io_sq_shared_cq ...[2024-07-15 23:38:19.626127] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:30.850 [2024-07-15 23:38:19.756232] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:30.850 [2024-07-15 23:38:19.793305] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:30.850 passed 00:12:30.850 00:12:30.850 Run Summary: Type Total Ran Passed Failed Inactive 00:12:30.850 suites 1 1 n/a 0 0 00:12:30.850 tests 18 18 18 0 0 00:12:30.850 asserts 360 360 360 0 n/a 00:12:30.850 00:12:30.850 Elapsed time = 1.506 seconds 00:12:31.110 23:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 936688 00:12:31.110 23:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@942 -- # '[' -z 936688 ']' 00:12:31.110 23:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # kill -0 936688 00:12:31.110 23:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # uname 00:12:31.110 23:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:31.110 23:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 936688 00:12:31.110 23:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:31.110 23:38:19 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:31.110 23:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # echo 'killing process with pid 936688' 00:12:31.110 killing process with pid 936688 00:12:31.110 23:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@961 -- # kill 936688 00:12:31.110 23:38:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # wait 936688 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:31.370 00:12:31.370 real 0m6.150s 00:12:31.370 user 0m17.555s 00:12:31.370 sys 0m0.452s 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:31.370 ************************************ 00:12:31.370 END TEST nvmf_vfio_user_nvme_compliance 00:12:31.370 ************************************ 00:12:31.370 23:38:20 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:12:31.370 23:38:20 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:31.370 23:38:20 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:31.370 23:38:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:31.370 23:38:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:31.370 ************************************ 00:12:31.370 START TEST nvmf_vfio_user_fuzz 00:12:31.370 ************************************ 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:31.370 * Looking for test storage... 00:12:31.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=937701 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 937701' 00:12:31.370 Process pid: 937701 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 937701 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@823 -- # '[' -z 937701 ']' 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:31.370 23:38:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:32.309 23:38:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:32.309 23:38:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # return 0 00:12:32.309 23:38:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:33.244 malloc0 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:33.244 
23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:33.244 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:33.245 23:38:22 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:05.403 Fuzzing completed. 
Shutting down the fuzz application 00:13:05.403 00:13:05.403 Dumping successful admin opcodes: 00:13:05.403 8, 9, 10, 24, 00:13:05.403 Dumping successful io opcodes: 00:13:05.403 0, 00:13:05.403 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1002033, total successful commands: 3929, random_seed: 3950613632 00:13:05.403 NS: 0x200003a1ef00 admin qp, Total commands completed: 248463, total successful commands: 2009, random_seed: 458493184 00:13:05.403 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:05.403 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:05.403 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:05.403 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:05.403 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 937701 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@942 -- # '[' -z 937701 ']' 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # kill -0 937701 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # uname 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 937701 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # echo 'killing process with pid 937701' 00:13:05.404 killing process with pid 937701 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@961 -- # kill 937701 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # wait 937701 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:05.404 00:13:05.404 real 0m32.791s 00:13:05.404 user 0m30.460s 00:13:05.404 sys 0m31.039s 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:05.404 23:38:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:05.404 ************************************ 00:13:05.404 END TEST nvmf_vfio_user_fuzz 00:13:05.404 ************************************ 00:13:05.404 23:38:52 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:05.404 23:38:52 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:05.404 23:38:52 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:05.404 23:38:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:05.404 23:38:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:05.404 ************************************ 00:13:05.404 START TEST nvmf_host_management 00:13:05.404 ************************************ 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:05.404 * Looking for test storage... 
00:13:05.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:05.404 
23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:05.404 23:38:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:09.602 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:09.602 
23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:09.602 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:09.602 Found net devices under 0000:86:00.0: cvl_0_0 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:09.602 Found net devices under 0000:86:00.1: cvl_0_1 00:13:09.602 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:09.603 23:38:58 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:09.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:09.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:13:09.603 00:13:09.603 --- 10.0.0.2 ping statistics --- 00:13:09.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.603 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:13:09.603 00:13:09.603 --- 10.0.0.1 ping statistics --- 00:13:09.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.603 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:09.603 23:38:58 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=946116 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 946116 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@823 -- # '[' -z 946116 ']' 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:09.603 23:38:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:09.603 [2024-07-15 23:38:58.423691] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:13:09.603 [2024-07-15 23:38:58.423734] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.603 [2024-07-15 23:38:58.480262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.603 [2024-07-15 23:38:58.560811] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.603 [2024-07-15 23:38:58.560848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.603 [2024-07-15 23:38:58.560855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.603 [2024-07-15 23:38:58.560862] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.603 [2024-07-15 23:38:58.560867] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:09.603 [2024-07-15 23:38:58.560966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.603 [2024-07-15 23:38:58.560991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.603 [2024-07-15 23:38:58.561106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.603 [2024-07-15 23:38:58.561108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:10.541 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:10.541 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # return 0 00:13:10.541 23:38:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:10.541 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:10.541 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.541 23:38:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.541 23:38:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.541 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.542 [2024-07-15 23:38:59.280255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.542 23:38:59 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.542 Malloc0 00:13:10.542 [2024-07-15 23:38:59.340138] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=946268 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 946268 /var/tmp/bdevperf.sock 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@823 -- # '[' -z 946268 ']' 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:10.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:10.542 { 00:13:10.542 "params": { 00:13:10.542 "name": "Nvme$subsystem", 00:13:10.542 "trtype": "$TEST_TRANSPORT", 00:13:10.542 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:10.542 "adrfam": "ipv4", 00:13:10.542 "trsvcid": "$NVMF_PORT", 00:13:10.542 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:10.542 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:10.542 "hdgst": ${hdgst:-false}, 00:13:10.542 "ddgst": ${ddgst:-false} 00:13:10.542 }, 00:13:10.542 "method": "bdev_nvme_attach_controller" 00:13:10.542 } 00:13:10.542 EOF 00:13:10.542 )") 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:10.542 23:38:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:10.542 "params": { 00:13:10.542 "name": "Nvme0", 00:13:10.542 "trtype": "tcp", 00:13:10.542 "traddr": "10.0.0.2", 00:13:10.542 "adrfam": "ipv4", 00:13:10.542 "trsvcid": "4420", 00:13:10.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:10.542 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:10.542 "hdgst": false, 00:13:10.542 "ddgst": false 00:13:10.542 }, 00:13:10.542 "method": "bdev_nvme_attach_controller" 00:13:10.542 }' 00:13:10.542 [2024-07-15 23:38:59.433073] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:10.542 [2024-07-15 23:38:59.433119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946268 ] 00:13:10.542 [2024-07-15 23:38:59.487928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.802 [2024-07-15 23:38:59.561489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.802 Running I/O for 10 seconds... 
00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # return 0 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.372 
23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=972 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 972 -ge 100 ']' 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.372 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:11.372 [2024-07-15 23:39:00.320895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.372 [2024-07-15 23:39:00.320932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.320942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.372 [2024-07-15 23:39:00.320949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.320957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.372 [2024-07-15 23:39:00.321061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.372 [2024-07-15 23:39:00.321075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215a980 is same with the state(5) to be set 00:13:11.372 [2024-07-15 23:39:00.321802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.372 [2024-07-15 23:39:00.321820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.372 [2024-07-15 23:39:00.321843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.372 [2024-07-15 23:39:00.321860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.372 [2024-07-15 23:39:00.321877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.372 [2024-07-15 23:39:00.321893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.372 [2024-07-15 23:39:00.321910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.372 [2024-07-15 23:39:00.321928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.372 [2024-07-15 23:39:00.321945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.372 [2024-07-15 23:39:00.321962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:13:11.372 [2024-07-15 23:39:00.321980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.372 [2024-07-15 23:39:00.321990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.372 [2024-07-15 23:39:00.322001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322383] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322478] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 
23:39:00.322679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.373 [2024-07-15 23:39:00.322723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.373 [2024-07-15 23:39:00.322733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.322934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:11.374 [2024-07-15 23:39:00.322942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.374 [2024-07-15 23:39:00.323005] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x256bb20 was disconnected and freed. reset controller. 
00:13:11.374 [2024-07-15 23:39:00.323905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:11.374 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.374 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:11.374 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:11.374 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:11.374 task offset: 1152 on job bdev=Nvme0n1 fails 00:13:11.374 00:13:11.374 Latency(us) 00:13:11.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.374 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:11.374 Job: Nvme0n1 ended in about 0.61 seconds with error 00:13:11.374 Verification LBA range: start 0x0 length 0x400 00:13:11.374 Nvme0n1 : 0.61 1700.57 106.29 105.36 0.00 34732.50 1524.42 34876.55 00:13:11.374 =================================================================================================================== 00:13:11.374 Total : 1700.57 106.29 105.36 0.00 34732.50 1524.42 34876.55 00:13:11.374 [2024-07-15 23:39:00.325499] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:11.374 [2024-07-15 23:39:00.325514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215a980 (9): Bad file descriptor 00:13:11.374 23:39:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:11.374 23:39:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:11.634 [2024-07-15 23:39:00.377403] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 946268 00:13:12.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (946268) - No such process 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:12.571 { 00:13:12.571 "params": { 00:13:12.571 "name": "Nvme$subsystem", 00:13:12.571 "trtype": "$TEST_TRANSPORT", 00:13:12.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:12.571 "adrfam": "ipv4", 00:13:12.571 "trsvcid": "$NVMF_PORT", 00:13:12.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:12.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:12.571 "hdgst": ${hdgst:-false}, 00:13:12.571 "ddgst": ${ddgst:-false} 00:13:12.571 }, 00:13:12.571 "method": "bdev_nvme_attach_controller" 00:13:12.571 } 00:13:12.571 EOF 00:13:12.571 )") 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:12.571 23:39:01 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:12.571 "params": { 00:13:12.571 "name": "Nvme0", 00:13:12.571 "trtype": "tcp", 00:13:12.571 "traddr": "10.0.0.2", 00:13:12.571 "adrfam": "ipv4", 00:13:12.571 "trsvcid": "4420", 00:13:12.571 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:12.571 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:12.571 "hdgst": false, 00:13:12.571 "ddgst": false 00:13:12.571 }, 00:13:12.571 "method": "bdev_nvme_attach_controller" 00:13:12.571 }' 00:13:12.571 [2024-07-15 23:39:01.385521] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:12.571 [2024-07-15 23:39:01.385575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946812 ] 00:13:12.571 [2024-07-15 23:39:01.440360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.571 [2024-07-15 23:39:01.511976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.831 Running I/O for 1 seconds... 
00:13:14.209 00:13:14.209 Latency(us) 00:13:14.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.209 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:14.209 Verification LBA range: start 0x0 length 0x400 00:13:14.209 Nvme0n1 : 1.01 1579.99 98.75 0.00 0.00 39933.01 8263.23 35104.50 00:13:14.209 =================================================================================================================== 00:13:14.209 Total : 1579.99 98.75 0.00 0.00 39933.01 8263.23 35104.50 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:14.209 23:39:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:14.209 rmmod nvme_tcp 00:13:14.209 rmmod nvme_fabrics 00:13:14.209 rmmod nvme_keyring 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:14.209 
23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 946116 ']' 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 946116 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@942 -- # '[' -z 946116 ']' 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # kill -0 946116 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # uname 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 946116 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@960 -- # echo 'killing process with pid 946116' 00:13:14.209 killing process with pid 946116 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@961 -- # kill 946116 00:13:14.209 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # wait 946116 00:13:14.468 [2024-07-15 23:39:03.241277] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:14.468 23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:14.468 23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:14.468 23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:14.468 23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:14.468 23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:14.468 23:39:03 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.468 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:14.468 23:39:03 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.375 23:39:05 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:16.375 23:39:05 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:16.375 00:13:16.375 real 0m12.314s 00:13:16.375 user 0m22.648s 00:13:16.375 sys 0m5.057s 00:13:16.375 23:39:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:16.375 23:39:05 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:16.375 ************************************ 00:13:16.375 END TEST nvmf_host_management 00:13:16.375 ************************************ 00:13:16.634 23:39:05 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:16.634 23:39:05 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:16.634 23:39:05 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:16.634 23:39:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:16.634 23:39:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:16.634 ************************************ 00:13:16.634 START TEST nvmf_lvol 00:13:16.634 ************************************ 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:16.634 * Looking for test storage... 
00:13:16.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:16.634 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:16.635 23:39:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:16.635 23:39:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:21.908 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:21.908 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:21.908 Found net devices under 0000:86:00.0: cvl_0_0 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:13:21.908 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:21.909 Found net devices under 0000:86:00.1: cvl_0_1 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 
00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:21.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:13:21.909 00:13:21.909 --- 10.0.0.2 ping statistics --- 00:13:21.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.909 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:13:21.909 00:13:21.909 --- 10.0.0.1 ping statistics --- 00:13:21.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.909 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=950793 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 950793 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@823 -- # '[' -z 950793 ']' 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:21.909 23:39:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:21.909 [2024-07-15 23:39:10.798867] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:21.909 [2024-07-15 23:39:10.798910] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.909 [2024-07-15 23:39:10.855596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.168 [2024-07-15 23:39:10.935242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.168 [2024-07-15 23:39:10.935278] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.168 [2024-07-15 23:39:10.935285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.168 [2024-07-15 23:39:10.935291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.168 [2024-07-15 23:39:10.935297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:22.168 [2024-07-15 23:39:10.935336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.168 [2024-07-15 23:39:10.935429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.168 [2024-07-15 23:39:10.935430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.737 23:39:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:22.737 23:39:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # return 0 00:13:22.737 23:39:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.737 23:39:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.737 23:39:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:22.737 23:39:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.737 23:39:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:22.995 [2024-07-15 23:39:11.788054] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.995 23:39:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.252 23:39:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:23.252 23:39:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:23.252 23:39:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:23.252 23:39:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:23.511 23:39:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:23.770 23:39:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8300b5b9-7a17-4492-8985-5d5ffedfa1ea 00:13:23.770 23:39:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8300b5b9-7a17-4492-8985-5d5ffedfa1ea lvol 20 00:13:24.030 23:39:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2910515f-e050-40b3-bb39-5a04fbf3ac38 00:13:24.030 23:39:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:24.030 23:39:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2910515f-e050-40b3-bb39-5a04fbf3ac38 00:13:24.289 23:39:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:24.289 [2024-07-15 23:39:13.234608] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.547 23:39:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:24.547 23:39:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=951278 00:13:24.547 23:39:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:24.547 23:39:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:25.484 23:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2910515f-e050-40b3-bb39-5a04fbf3ac38 MY_SNAPSHOT 00:13:25.743 23:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d7b4ccd4-ab62-4380-9334-ad3f0ba2e00b 00:13:25.743 23:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2910515f-e050-40b3-bb39-5a04fbf3ac38 30 00:13:26.002 23:39:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d7b4ccd4-ab62-4380-9334-ad3f0ba2e00b MY_CLONE 00:13:26.261 23:39:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=eaeb1de4-df14-4b46-9c3c-566b42c21150 00:13:26.261 23:39:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate eaeb1de4-df14-4b46-9c3c-566b42c21150 00:13:26.518 23:39:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 951278 00:13:36.496 Initializing NVMe Controllers 00:13:36.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:36.496 Controller IO queue size 128, less than required. 00:13:36.496 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:36.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:36.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:36.496 Initialization complete. Launching workers. 
00:13:36.496 ======================================================== 00:13:36.496 Latency(us) 00:13:36.496 Device Information : IOPS MiB/s Average min max 00:13:36.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12155.40 47.48 10539.05 1697.75 48752.68 00:13:36.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11922.10 46.57 10744.19 3540.14 58481.47 00:13:36.496 ======================================================== 00:13:36.496 Total : 24077.50 94.05 10640.62 1697.75 58481.47 00:13:36.496 00:13:36.496 23:39:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2910515f-e050-40b3-bb39-5a04fbf3ac38 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8300b5b9-7a17-4492-8985-5d5ffedfa1ea 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:36.496 rmmod nvme_tcp 00:13:36.496 rmmod nvme_fabrics 00:13:36.496 rmmod nvme_keyring 00:13:36.496 
23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 950793 ']' 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 950793 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@942 -- # '[' -z 950793 ']' 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # kill -0 950793 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # uname 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 950793 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@960 -- # echo 'killing process with pid 950793' 00:13:36.496 killing process with pid 950793 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@961 -- # kill 950793 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # wait 950793 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.496 23:39:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:37.872 23:39:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:38.131 00:13:38.131 real 0m21.449s 00:13:38.131 user 1m4.072s 00:13:38.131 sys 0m6.594s 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:38.131 ************************************ 00:13:38.131 END TEST nvmf_lvol 00:13:38.131 ************************************ 00:13:38.131 23:39:26 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:13:38.131 23:39:26 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:38.131 23:39:26 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:38.131 23:39:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:38.131 23:39:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:38.131 ************************************ 00:13:38.131 START TEST nvmf_lvs_grow 00:13:38.131 ************************************ 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:38.131 * Looking for test storage... 
00:13:38.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.131 23:39:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.131 23:39:26 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.131 23:39:27 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.131 23:39:27 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.131 23:39:27 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.131 23:39:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.131 23:39:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.131 23:39:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.131 23:39:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:38.131 23:39:27 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.132 23:39:27 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:38.132 23:39:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.448 23:39:31 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.448 23:39:31 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:43.448 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:43.448 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:43.448 23:39:31 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:43.448 Found net devices under 0000:86:00.0: cvl_0_0 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:43.448 Found net devices under 0000:86:00.1: cvl_0_1 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:43.448 23:39:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.448 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.448 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:43.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:13:43.449 00:13:43.449 --- 10.0.0.2 ping statistics --- 00:13:43.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.449 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:43.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:13:43.449 00:13:43.449 --- 10.0.0.1 ping statistics --- 00:13:43.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.449 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=956632 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 956632 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # '[' -z 956632 ']' 
00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:43.449 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:43.449 [2024-07-15 23:39:32.147295] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:43.449 [2024-07-15 23:39:32.147335] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.449 [2024-07-15 23:39:32.202147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.449 [2024-07-15 23:39:32.282307] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.449 [2024-07-15 23:39:32.282342] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.449 [2024-07-15 23:39:32.282349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.449 [2024-07-15 23:39:32.282355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.449 [2024-07-15 23:39:32.282361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:43.449 [2024-07-15 23:39:32.282378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.086 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:44.086 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # return 0 00:13:44.086 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.086 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.086 23:39:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.086 23:39:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.086 23:39:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:44.345 [2024-07-15 23:39:33.154510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:44.345 ************************************ 00:13:44.345 START TEST lvs_grow_clean 00:13:44.345 ************************************ 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1117 -- # lvs_grow 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:44.345 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:44.604 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:44.604 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:44.862 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:44.862 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:44.862 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:44.862 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:44.862 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:44.863 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cdbce982-755c-4f8d-bd89-f4480a1d86ef lvol 150 00:13:45.121 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3e379ef7-acfe-4d64-90a5-a2b0d27d1222 00:13:45.121 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:45.121 23:39:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:45.121 [2024-07-15 23:39:34.078478] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:45.121 [2024-07-15 23:39:34.078528] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:45.121 true 00:13:45.381 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:45.381 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:45.381 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:45.381 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:13:45.640 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3e379ef7-acfe-4d64-90a5-a2b0d27d1222 00:13:45.640 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:45.900 [2024-07-15 23:39:34.764509] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.900 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:46.160 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:46.160 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=957138 00:13:46.160 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:46.160 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 957138 /var/tmp/bdevperf.sock 00:13:46.160 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@823 -- # '[' -z 957138 ']' 00:13:46.160 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:46.160 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:46.160 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:46.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:46.160 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:46.160 23:39:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:46.160 [2024-07-15 23:39:34.960598] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:46.160 [2024-07-15 23:39:34.960640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957138 ] 00:13:46.160 [2024-07-15 23:39:35.012774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.160 [2024-07-15 23:39:35.091015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.419 23:39:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:46.419 23:39:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # return 0 00:13:46.419 23:39:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:46.678 Nvme0n1 00:13:46.678 23:39:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:46.678 [ 00:13:46.678 { 00:13:46.678 "name": "Nvme0n1", 00:13:46.678 "aliases": [ 00:13:46.678 "3e379ef7-acfe-4d64-90a5-a2b0d27d1222" 00:13:46.678 ], 00:13:46.678 "product_name": "NVMe disk", 00:13:46.678 
"block_size": 4096, 00:13:46.678 "num_blocks": 38912, 00:13:46.678 "uuid": "3e379ef7-acfe-4d64-90a5-a2b0d27d1222", 00:13:46.678 "assigned_rate_limits": { 00:13:46.678 "rw_ios_per_sec": 0, 00:13:46.678 "rw_mbytes_per_sec": 0, 00:13:46.678 "r_mbytes_per_sec": 0, 00:13:46.678 "w_mbytes_per_sec": 0 00:13:46.678 }, 00:13:46.678 "claimed": false, 00:13:46.678 "zoned": false, 00:13:46.678 "supported_io_types": { 00:13:46.678 "read": true, 00:13:46.678 "write": true, 00:13:46.678 "unmap": true, 00:13:46.678 "flush": true, 00:13:46.678 "reset": true, 00:13:46.678 "nvme_admin": true, 00:13:46.678 "nvme_io": true, 00:13:46.678 "nvme_io_md": false, 00:13:46.678 "write_zeroes": true, 00:13:46.678 "zcopy": false, 00:13:46.678 "get_zone_info": false, 00:13:46.678 "zone_management": false, 00:13:46.678 "zone_append": false, 00:13:46.678 "compare": true, 00:13:46.678 "compare_and_write": true, 00:13:46.679 "abort": true, 00:13:46.679 "seek_hole": false, 00:13:46.679 "seek_data": false, 00:13:46.679 "copy": true, 00:13:46.679 "nvme_iov_md": false 00:13:46.679 }, 00:13:46.679 "memory_domains": [ 00:13:46.679 { 00:13:46.679 "dma_device_id": "system", 00:13:46.679 "dma_device_type": 1 00:13:46.679 } 00:13:46.679 ], 00:13:46.679 "driver_specific": { 00:13:46.679 "nvme": [ 00:13:46.679 { 00:13:46.679 "trid": { 00:13:46.679 "trtype": "TCP", 00:13:46.679 "adrfam": "IPv4", 00:13:46.679 "traddr": "10.0.0.2", 00:13:46.679 "trsvcid": "4420", 00:13:46.679 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:46.679 }, 00:13:46.679 "ctrlr_data": { 00:13:46.679 "cntlid": 1, 00:13:46.679 "vendor_id": "0x8086", 00:13:46.679 "model_number": "SPDK bdev Controller", 00:13:46.679 "serial_number": "SPDK0", 00:13:46.679 "firmware_revision": "24.09", 00:13:46.679 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:46.679 "oacs": { 00:13:46.679 "security": 0, 00:13:46.679 "format": 0, 00:13:46.679 "firmware": 0, 00:13:46.679 "ns_manage": 0 00:13:46.679 }, 00:13:46.679 "multi_ctrlr": true, 00:13:46.679 "ana_reporting": 
false 00:13:46.679 }, 00:13:46.679 "vs": { 00:13:46.679 "nvme_version": "1.3" 00:13:46.679 }, 00:13:46.679 "ns_data": { 00:13:46.679 "id": 1, 00:13:46.679 "can_share": true 00:13:46.679 } 00:13:46.679 } 00:13:46.679 ], 00:13:46.679 "mp_policy": "active_passive" 00:13:46.679 } 00:13:46.679 } 00:13:46.679 ] 00:13:46.679 23:39:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=957152 00:13:46.679 23:39:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:46.679 23:39:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:46.938 Running I/O for 10 seconds... 00:13:47.876 Latency(us) 00:13:47.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:47.876 Nvme0n1 : 1.00 22995.00 89.82 0.00 0.00 0.00 0.00 0.00 00:13:47.876 =================================================================================================================== 00:13:47.876 Total : 22995.00 89.82 0.00 0.00 0.00 0.00 0.00 00:13:47.876 00:13:48.816 23:39:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:48.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:48.816 Nvme0n1 : 2.00 23081.50 90.16 0.00 0.00 0.00 0.00 0.00 00:13:48.816 =================================================================================================================== 00:13:48.816 Total : 23081.50 90.16 0.00 0.00 0.00 0.00 0.00 00:13:48.816 00:13:48.816 true 00:13:48.816 23:39:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:48.816 23:39:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:49.075 23:39:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:49.075 23:39:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:49.075 23:39:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 957152 00:13:50.012 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.012 Nvme0n1 : 3.00 23089.33 90.19 0.00 0.00 0.00 0.00 0.00 00:13:50.012 =================================================================================================================== 00:13:50.012 Total : 23089.33 90.19 0.00 0.00 0.00 0.00 0.00 00:13:50.012 00:13:50.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:50.950 Nvme0n1 : 4.00 23109.00 90.27 0.00 0.00 0.00 0.00 0.00 00:13:50.950 =================================================================================================================== 00:13:50.950 Total : 23109.00 90.27 0.00 0.00 0.00 0.00 0.00 00:13:50.950 00:13:51.889 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:51.889 Nvme0n1 : 5.00 23095.20 90.22 0.00 0.00 0.00 0.00 0.00 00:13:51.889 =================================================================================================================== 00:13:51.889 Total : 23095.20 90.22 0.00 0.00 0.00 0.00 0.00 00:13:51.889 00:13:52.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:52.828 Nvme0n1 : 6.00 23128.67 90.35 0.00 0.00 0.00 0.00 0.00 00:13:52.828 =================================================================================================================== 00:13:52.828 Total : 23128.67 
90.35 0.00 0.00 0.00 0.00 0.00 00:13:52.828 00:13:53.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:53.767 Nvme0n1 : 7.00 23152.57 90.44 0.00 0.00 0.00 0.00 0.00 00:13:53.767 =================================================================================================================== 00:13:53.767 Total : 23152.57 90.44 0.00 0.00 0.00 0.00 0.00 00:13:53.767 00:13:55.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:55.147 Nvme0n1 : 8.00 23146.38 90.42 0.00 0.00 0.00 0.00 0.00 00:13:55.147 =================================================================================================================== 00:13:55.147 Total : 23146.38 90.42 0.00 0.00 0.00 0.00 0.00 00:13:55.147 00:13:56.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:56.084 Nvme0n1 : 9.00 23170.11 90.51 0.00 0.00 0.00 0.00 0.00 00:13:56.084 =================================================================================================================== 00:13:56.084 Total : 23170.11 90.51 0.00 0.00 0.00 0.00 0.00 00:13:56.084 00:13:57.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.019 Nvme0n1 : 10.00 23189.00 90.58 0.00 0.00 0.00 0.00 0.00 00:13:57.019 =================================================================================================================== 00:13:57.019 Total : 23189.00 90.58 0.00 0.00 0.00 0.00 0.00 00:13:57.019 00:13:57.019 00:13:57.019 Latency(us) 00:13:57.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:57.019 Nvme0n1 : 10.00 23191.61 90.59 0.00 0.00 5515.76 1467.44 9061.06 00:13:57.019 =================================================================================================================== 00:13:57.019 Total : 23191.61 90.59 0.00 0.00 5515.76 1467.44 9061.06 00:13:57.019 0 
00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 957138 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@942 -- # '[' -z 957138 ']' 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # kill -0 957138 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # uname 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 957138 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 957138' 00:13:57.019 killing process with pid 957138 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@961 -- # kill 957138 00:13:57.019 Received shutdown signal, test time was about 10.000000 seconds 00:13:57.019 00:13:57.019 Latency(us) 00:13:57.019 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.019 =================================================================================================================== 00:13:57.019 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # wait 957138 00:13:57.019 23:39:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:57.278 23:39:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:57.536 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:57.536 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:57.536 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:57.536 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:57.536 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:57.803 [2024-07-15 23:39:46.662797] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # local es=0 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:57.803 23:39:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:57.803 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:58.146 request: 00:13:58.146 { 00:13:58.146 "uuid": "cdbce982-755c-4f8d-bd89-f4480a1d86ef", 00:13:58.146 "method": "bdev_lvol_get_lvstores", 00:13:58.146 "req_id": 1 00:13:58.146 } 00:13:58.146 Got JSON-RPC error response 00:13:58.146 response: 00:13:58.146 { 00:13:58.146 "code": -19, 00:13:58.146 "message": "No such device" 00:13:58.146 } 00:13:58.146 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # es=1 00:13:58.146 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:13:58.146 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:13:58.146 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:13:58.146 23:39:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:58.146 aio_bdev 00:13:58.146 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3e379ef7-acfe-4d64-90a5-a2b0d27d1222 00:13:58.146 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@891 -- # local bdev_name=3e379ef7-acfe-4d64-90a5-a2b0d27d1222 00:13:58.146 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:58.146 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@893 -- # local i 00:13:58.146 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:58.146 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:58.146 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:58.405 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3e379ef7-acfe-4d64-90a5-a2b0d27d1222 -t 2000 00:13:58.405 [ 00:13:58.405 { 00:13:58.405 "name": "3e379ef7-acfe-4d64-90a5-a2b0d27d1222", 00:13:58.405 "aliases": [ 00:13:58.405 "lvs/lvol" 00:13:58.405 ], 00:13:58.405 "product_name": "Logical Volume", 00:13:58.405 "block_size": 4096, 00:13:58.405 "num_blocks": 38912, 00:13:58.405 "uuid": "3e379ef7-acfe-4d64-90a5-a2b0d27d1222", 00:13:58.405 "assigned_rate_limits": { 00:13:58.405 "rw_ios_per_sec": 0, 00:13:58.405 "rw_mbytes_per_sec": 0, 00:13:58.405 "r_mbytes_per_sec": 0, 00:13:58.405 "w_mbytes_per_sec": 0 00:13:58.405 }, 00:13:58.405 "claimed": false, 00:13:58.405 "zoned": false, 00:13:58.405 
"supported_io_types": { 00:13:58.405 "read": true, 00:13:58.405 "write": true, 00:13:58.405 "unmap": true, 00:13:58.405 "flush": false, 00:13:58.405 "reset": true, 00:13:58.405 "nvme_admin": false, 00:13:58.405 "nvme_io": false, 00:13:58.405 "nvme_io_md": false, 00:13:58.405 "write_zeroes": true, 00:13:58.405 "zcopy": false, 00:13:58.405 "get_zone_info": false, 00:13:58.405 "zone_management": false, 00:13:58.405 "zone_append": false, 00:13:58.405 "compare": false, 00:13:58.405 "compare_and_write": false, 00:13:58.405 "abort": false, 00:13:58.405 "seek_hole": true, 00:13:58.405 "seek_data": true, 00:13:58.405 "copy": false, 00:13:58.405 "nvme_iov_md": false 00:13:58.405 }, 00:13:58.405 "driver_specific": { 00:13:58.405 "lvol": { 00:13:58.405 "lvol_store_uuid": "cdbce982-755c-4f8d-bd89-f4480a1d86ef", 00:13:58.405 "base_bdev": "aio_bdev", 00:13:58.405 "thin_provision": false, 00:13:58.405 "num_allocated_clusters": 38, 00:13:58.405 "snapshot": false, 00:13:58.405 "clone": false, 00:13:58.405 "esnap_clone": false 00:13:58.405 } 00:13:58.405 } 00:13:58.405 } 00:13:58.405 ] 00:13:58.405 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # return 0 00:13:58.405 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:58.405 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:58.663 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:58.663 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:58.663 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:13:58.922 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:58.922 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3e379ef7-acfe-4d64-90a5-a2b0d27d1222 00:13:58.922 23:39:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cdbce982-755c-4f8d-bd89-f4480a1d86ef 00:13:59.180 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:59.438 00:13:59.438 real 0m15.059s 00:13:59.438 user 0m14.656s 00:13:59.438 sys 0m1.359s 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:59.438 ************************************ 00:13:59.438 END TEST lvs_grow_clean 00:13:59.438 ************************************ 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1136 -- # return 0 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:59.438 ************************************ 00:13:59.438 START TEST lvs_grow_dirty 
00:13:59.438 ************************************ 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1117 -- # lvs_grow dirty 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:59.438 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:59.439 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:59.439 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:59.696 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:59.696 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:59.955 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:13:59.955 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:59.955 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:13:59.955 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:59.955 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:59.955 23:39:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae lvol 150 00:14:00.213 23:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d8a2d2bb-dfcf-45fb-ad39-d8648704be21 00:14:00.213 23:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:00.213 23:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:00.471 [2024-07-15 23:39:49.228812] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:00.471 [2024-07-15 23:39:49.228866] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:00.471 true 00:14:00.471 23:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:00.471 23:39:49 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:00.471 23:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:00.471 23:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:00.729 23:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d8a2d2bb-dfcf-45fb-ad39-d8648704be21 00:14:00.987 23:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:00.987 [2024-07-15 23:39:49.894804] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.987 23:39:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:01.245 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=959607 00:14:01.245 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:01.245 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 959607 /var/tmp/bdevperf.sock 00:14:01.245 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@823 -- # '[' -z 959607 ']' 00:14:01.245 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.245 23:39:50 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:01.245 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.245 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:01.245 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:01.245 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:01.245 [2024-07-15 23:39:50.130641] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:14:01.245 [2024-07-15 23:39:50.130690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959607 ] 00:14:01.245 [2024-07-15 23:39:50.183857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.504 [2024-07-15 23:39:50.264324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.071 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:02.071 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # return 0 00:14:02.071 23:39:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
00:14:02.329 Nvme0n1 00:14:02.329 23:39:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:02.588 [ 00:14:02.588 { 00:14:02.588 "name": "Nvme0n1", 00:14:02.588 "aliases": [ 00:14:02.588 "d8a2d2bb-dfcf-45fb-ad39-d8648704be21" 00:14:02.588 ], 00:14:02.588 "product_name": "NVMe disk", 00:14:02.588 "block_size": 4096, 00:14:02.588 "num_blocks": 38912, 00:14:02.588 "uuid": "d8a2d2bb-dfcf-45fb-ad39-d8648704be21", 00:14:02.588 "assigned_rate_limits": { 00:14:02.588 "rw_ios_per_sec": 0, 00:14:02.588 "rw_mbytes_per_sec": 0, 00:14:02.588 "r_mbytes_per_sec": 0, 00:14:02.588 "w_mbytes_per_sec": 0 00:14:02.588 }, 00:14:02.588 "claimed": false, 00:14:02.588 "zoned": false, 00:14:02.588 "supported_io_types": { 00:14:02.588 "read": true, 00:14:02.588 "write": true, 00:14:02.588 "unmap": true, 00:14:02.588 "flush": true, 00:14:02.588 "reset": true, 00:14:02.588 "nvme_admin": true, 00:14:02.588 "nvme_io": true, 00:14:02.588 "nvme_io_md": false, 00:14:02.588 "write_zeroes": true, 00:14:02.588 "zcopy": false, 00:14:02.588 "get_zone_info": false, 00:14:02.588 "zone_management": false, 00:14:02.588 "zone_append": false, 00:14:02.588 "compare": true, 00:14:02.588 "compare_and_write": true, 00:14:02.588 "abort": true, 00:14:02.588 "seek_hole": false, 00:14:02.588 "seek_data": false, 00:14:02.588 "copy": true, 00:14:02.588 "nvme_iov_md": false 00:14:02.588 }, 00:14:02.588 "memory_domains": [ 00:14:02.588 { 00:14:02.588 "dma_device_id": "system", 00:14:02.588 "dma_device_type": 1 00:14:02.588 } 00:14:02.588 ], 00:14:02.588 "driver_specific": { 00:14:02.588 "nvme": [ 00:14:02.588 { 00:14:02.588 "trid": { 00:14:02.588 "trtype": "TCP", 00:14:02.588 "adrfam": "IPv4", 00:14:02.588 "traddr": "10.0.0.2", 00:14:02.588 "trsvcid": "4420", 00:14:02.588 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:02.588 }, 00:14:02.588 "ctrlr_data": { 00:14:02.588 
"cntlid": 1, 00:14:02.588 "vendor_id": "0x8086", 00:14:02.588 "model_number": "SPDK bdev Controller", 00:14:02.588 "serial_number": "SPDK0", 00:14:02.588 "firmware_revision": "24.09", 00:14:02.588 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:02.588 "oacs": { 00:14:02.588 "security": 0, 00:14:02.588 "format": 0, 00:14:02.588 "firmware": 0, 00:14:02.588 "ns_manage": 0 00:14:02.588 }, 00:14:02.588 "multi_ctrlr": true, 00:14:02.588 "ana_reporting": false 00:14:02.588 }, 00:14:02.588 "vs": { 00:14:02.588 "nvme_version": "1.3" 00:14:02.588 }, 00:14:02.588 "ns_data": { 00:14:02.588 "id": 1, 00:14:02.588 "can_share": true 00:14:02.588 } 00:14:02.588 } 00:14:02.588 ], 00:14:02.588 "mp_policy": "active_passive" 00:14:02.588 } 00:14:02.588 } 00:14:02.588 ] 00:14:02.588 23:39:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:02.588 23:39:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=959759 00:14:02.588 23:39:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:02.588 Running I/O for 10 seconds... 
00:14:03.524 Latency(us) 00:14:03.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.524 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.524 Nvme0n1 : 1.00 22227.00 86.82 0.00 0.00 0.00 0.00 0.00 00:14:03.524 =================================================================================================================== 00:14:03.524 Total : 22227.00 86.82 0.00 0.00 0.00 0.00 0.00 00:14:03.524 00:14:04.459 23:39:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:04.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.459 Nvme0n1 : 2.00 22313.50 87.16 0.00 0.00 0.00 0.00 0.00 00:14:04.459 =================================================================================================================== 00:14:04.459 Total : 22313.50 87.16 0.00 0.00 0.00 0.00 0.00 00:14:04.459 00:14:04.718 true 00:14:04.718 23:39:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:04.718 23:39:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:04.976 23:39:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:04.976 23:39:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:04.976 23:39:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 959759 00:14:05.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.543 Nvme0n1 : 3.00 22307.67 87.14 0.00 0.00 0.00 0.00 0.00 00:14:05.543 
=================================================================================================================== 00:14:05.543 Total : 22307.67 87.14 0.00 0.00 0.00 0.00 0.00 00:14:05.543 00:14:06.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.479 Nvme0n1 : 4.00 22370.75 87.39 0.00 0.00 0.00 0.00 0.00 00:14:06.479 =================================================================================================================== 00:14:06.479 Total : 22370.75 87.39 0.00 0.00 0.00 0.00 0.00 00:14:06.479 00:14:07.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.858 Nvme0n1 : 5.00 22400.60 87.50 0.00 0.00 0.00 0.00 0.00 00:14:07.859 =================================================================================================================== 00:14:07.859 Total : 22400.60 87.50 0.00 0.00 0.00 0.00 0.00 00:14:07.859 00:14:08.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.793 Nvme0n1 : 6.00 22432.50 87.63 0.00 0.00 0.00 0.00 0.00 00:14:08.793 =================================================================================================================== 00:14:08.793 Total : 22432.50 87.63 0.00 0.00 0.00 0.00 0.00 00:14:08.793 00:14:09.755 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.755 Nvme0n1 : 7.00 22457.57 87.72 0.00 0.00 0.00 0.00 0.00 00:14:09.755 =================================================================================================================== 00:14:09.755 Total : 22457.57 87.72 0.00 0.00 0.00 0.00 0.00 00:14:09.755 00:14:10.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.688 Nvme0n1 : 8.00 22481.38 87.82 0.00 0.00 0.00 0.00 0.00 00:14:10.688 =================================================================================================================== 00:14:10.688 Total : 22481.38 87.82 0.00 0.00 0.00 0.00 0.00 00:14:10.688 
00:14:11.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.639 Nvme0n1 : 9.00 22462.56 87.74 0.00 0.00 0.00 0.00 0.00 00:14:11.639 =================================================================================================================== 00:14:11.639 Total : 22462.56 87.74 0.00 0.00 0.00 0.00 0.00 00:14:11.639 00:14:12.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.573 Nvme0n1 : 10.00 22446.70 87.68 0.00 0.00 0.00 0.00 0.00 00:14:12.573 =================================================================================================================== 00:14:12.573 Total : 22446.70 87.68 0.00 0.00 0.00 0.00 0.00 00:14:12.573 00:14:12.573 00:14:12.573 Latency(us) 00:14:12.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.573 Nvme0n1 : 10.01 22446.64 87.68 0.00 0.00 5698.28 1823.61 7750.34 00:14:12.573 =================================================================================================================== 00:14:12.573 Total : 22446.64 87.68 0.00 0.00 5698.28 1823.61 7750.34 00:14:12.573 0 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 959607 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@942 -- # '[' -z 959607 ']' 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # kill -0 959607 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # uname 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 959607 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@948 -- # process_name=reactor_1 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # echo 'killing process with pid 959607' 00:14:12.573 killing process with pid 959607 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@961 -- # kill 959607 00:14:12.573 Received shutdown signal, test time was about 10.000000 seconds 00:14:12.573 00:14:12.573 Latency(us) 00:14:12.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.573 =================================================================================================================== 00:14:12.573 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:12.573 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # wait 959607 00:14:12.832 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:13.091 23:40:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:13.091 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:13.091 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:13.349 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:13.350 
23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 956632 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 956632 00:14:13.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 956632 Killed "${NVMF_APP[@]}" "$@" 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=961587 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 961587 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@823 -- # '[' -z 961587 ']' 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:13.350 23:40:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:13.350 [2024-07-15 23:40:02.318579] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:14:13.350 [2024-07-15 23:40:02.318625] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.609 [2024-07-15 23:40:02.376368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.609 [2024-07-15 23:40:02.452509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.609 [2024-07-15 23:40:02.452544] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.609 [2024-07-15 23:40:02.452552] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.609 [2024-07-15 23:40:02.452558] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.609 [2024-07-15 23:40:02.452562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:13.609 [2024-07-15 23:40:02.452579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.195 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:14.195 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # return 0 00:14:14.196 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.196 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:14.196 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:14.196 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.196 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:14.457 [2024-07-15 23:40:03.302570] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:14.457 [2024-07-15 23:40:03.302647] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:14.457 [2024-07-15 23:40:03.302671] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:14.457 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:14.457 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d8a2d2bb-dfcf-45fb-ad39-d8648704be21 00:14:14.457 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@891 -- # local bdev_name=d8a2d2bb-dfcf-45fb-ad39-d8648704be21 00:14:14.457 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:14.457 23:40:03 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@893 -- # local i 00:14:14.457 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:14.457 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:14.457 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:14.718 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d8a2d2bb-dfcf-45fb-ad39-d8648704be21 -t 2000 00:14:14.718 [ 00:14:14.718 { 00:14:14.718 "name": "d8a2d2bb-dfcf-45fb-ad39-d8648704be21", 00:14:14.718 "aliases": [ 00:14:14.718 "lvs/lvol" 00:14:14.718 ], 00:14:14.718 "product_name": "Logical Volume", 00:14:14.718 "block_size": 4096, 00:14:14.718 "num_blocks": 38912, 00:14:14.718 "uuid": "d8a2d2bb-dfcf-45fb-ad39-d8648704be21", 00:14:14.718 "assigned_rate_limits": { 00:14:14.718 "rw_ios_per_sec": 0, 00:14:14.718 "rw_mbytes_per_sec": 0, 00:14:14.718 "r_mbytes_per_sec": 0, 00:14:14.718 "w_mbytes_per_sec": 0 00:14:14.718 }, 00:14:14.718 "claimed": false, 00:14:14.718 "zoned": false, 00:14:14.718 "supported_io_types": { 00:14:14.718 "read": true, 00:14:14.718 "write": true, 00:14:14.718 "unmap": true, 00:14:14.718 "flush": false, 00:14:14.718 "reset": true, 00:14:14.718 "nvme_admin": false, 00:14:14.718 "nvme_io": false, 00:14:14.718 "nvme_io_md": false, 00:14:14.718 "write_zeroes": true, 00:14:14.718 "zcopy": false, 00:14:14.718 "get_zone_info": false, 00:14:14.718 "zone_management": false, 00:14:14.718 "zone_append": false, 00:14:14.718 "compare": false, 00:14:14.718 "compare_and_write": false, 00:14:14.718 "abort": false, 00:14:14.718 "seek_hole": true, 00:14:14.718 "seek_data": true, 00:14:14.718 "copy": false, 00:14:14.718 "nvme_iov_md": false 
00:14:14.718 }, 00:14:14.718 "driver_specific": { 00:14:14.718 "lvol": { 00:14:14.718 "lvol_store_uuid": "b9b045a1-f8d5-44d1-b8b9-ab5a41811dae", 00:14:14.718 "base_bdev": "aio_bdev", 00:14:14.718 "thin_provision": false, 00:14:14.718 "num_allocated_clusters": 38, 00:14:14.718 "snapshot": false, 00:14:14.718 "clone": false, 00:14:14.718 "esnap_clone": false 00:14:14.718 } 00:14:14.718 } 00:14:14.718 } 00:14:14.718 ] 00:14:14.718 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # return 0 00:14:14.718 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:14.718 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:14.977 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:14.977 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:14.977 23:40:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:15.237 [2024-07-15 23:40:04.151311] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # local es=0 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:15.237 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:15.496 request: 00:14:15.496 { 00:14:15.496 "uuid": "b9b045a1-f8d5-44d1-b8b9-ab5a41811dae", 00:14:15.496 "method": "bdev_lvol_get_lvstores", 
00:14:15.496 "req_id": 1 00:14:15.496 } 00:14:15.496 Got JSON-RPC error response 00:14:15.496 response: 00:14:15.496 { 00:14:15.496 "code": -19, 00:14:15.496 "message": "No such device" 00:14:15.496 } 00:14:15.496 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # es=1 00:14:15.496 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:14:15.496 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:14:15.496 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:14:15.496 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:15.756 aio_bdev 00:14:15.756 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d8a2d2bb-dfcf-45fb-ad39-d8648704be21 00:14:15.756 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@891 -- # local bdev_name=d8a2d2bb-dfcf-45fb-ad39-d8648704be21 00:14:15.756 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:14:15.756 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@893 -- # local i 00:14:15.756 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:14:15.756 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:14:15.756 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:15.756 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d8a2d2bb-dfcf-45fb-ad39-d8648704be21 -t 2000 00:14:16.015 [ 00:14:16.015 { 00:14:16.015 "name": "d8a2d2bb-dfcf-45fb-ad39-d8648704be21", 00:14:16.015 "aliases": [ 00:14:16.015 "lvs/lvol" 00:14:16.015 ], 00:14:16.015 "product_name": "Logical Volume", 00:14:16.015 "block_size": 4096, 00:14:16.015 "num_blocks": 38912, 00:14:16.015 "uuid": "d8a2d2bb-dfcf-45fb-ad39-d8648704be21", 00:14:16.015 "assigned_rate_limits": { 00:14:16.015 "rw_ios_per_sec": 0, 00:14:16.015 "rw_mbytes_per_sec": 0, 00:14:16.015 "r_mbytes_per_sec": 0, 00:14:16.015 "w_mbytes_per_sec": 0 00:14:16.015 }, 00:14:16.015 "claimed": false, 00:14:16.015 "zoned": false, 00:14:16.015 "supported_io_types": { 00:14:16.015 "read": true, 00:14:16.015 "write": true, 00:14:16.015 "unmap": true, 00:14:16.015 "flush": false, 00:14:16.015 "reset": true, 00:14:16.015 "nvme_admin": false, 00:14:16.015 "nvme_io": false, 00:14:16.015 "nvme_io_md": false, 00:14:16.015 "write_zeroes": true, 00:14:16.015 "zcopy": false, 00:14:16.015 "get_zone_info": false, 00:14:16.015 "zone_management": false, 00:14:16.015 "zone_append": false, 00:14:16.015 "compare": false, 00:14:16.015 "compare_and_write": false, 00:14:16.015 "abort": false, 00:14:16.015 "seek_hole": true, 00:14:16.015 "seek_data": true, 00:14:16.015 "copy": false, 00:14:16.015 "nvme_iov_md": false 00:14:16.015 }, 00:14:16.015 "driver_specific": { 00:14:16.015 "lvol": { 00:14:16.015 "lvol_store_uuid": "b9b045a1-f8d5-44d1-b8b9-ab5a41811dae", 00:14:16.015 "base_bdev": "aio_bdev", 00:14:16.015 "thin_provision": false, 00:14:16.015 "num_allocated_clusters": 38, 00:14:16.015 "snapshot": false, 00:14:16.015 "clone": false, 00:14:16.015 "esnap_clone": false 00:14:16.015 } 00:14:16.015 } 00:14:16.015 } 00:14:16.015 ] 00:14:16.015 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # return 0 00:14:16.015 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:16.015 23:40:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:16.274 23:40:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:16.274 23:40:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:16.274 23:40:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:16.274 23:40:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:16.274 23:40:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d8a2d2bb-dfcf-45fb-ad39-d8648704be21 00:14:16.533 23:40:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9b045a1-f8d5-44d1-b8b9-ab5a41811dae 00:14:16.791 23:40:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:16.791 23:40:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:17.051 00:14:17.051 real 0m17.434s 00:14:17.051 user 0m43.857s 00:14:17.051 sys 0m4.010s 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:14:17.051 ************************************ 00:14:17.051 END TEST lvs_grow_dirty 00:14:17.051 ************************************ 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1136 -- # return 0 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@800 -- # type=--id 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@801 -- # id=0 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # for n in $shm_files 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:17.051 nvmf_trace.0 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # return 0 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.051 rmmod 
nvme_tcp 00:14:17.051 rmmod nvme_fabrics 00:14:17.051 rmmod nvme_keyring 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 961587 ']' 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 961587 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@942 -- # '[' -z 961587 ']' 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # kill -0 961587 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # uname 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 961587 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # echo 'killing process with pid 961587' 00:14:17.051 killing process with pid 961587 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@961 -- # kill 961587 00:14:17.051 23:40:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # wait 961587 00:14:17.377 23:40:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.377 23:40:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.377 23:40:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.377 23:40:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.377 
23:40:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.377 23:40:06 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.377 23:40:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.377 23:40:06 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.305 23:40:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.305 00:14:19.305 real 0m41.290s 00:14:19.305 user 1m4.118s 00:14:19.305 sys 0m9.634s 00:14:19.305 23:40:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:19.305 23:40:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:19.305 ************************************ 00:14:19.305 END TEST nvmf_lvs_grow 00:14:19.305 ************************************ 00:14:19.305 23:40:08 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:14:19.305 23:40:08 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:19.305 23:40:08 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:19.305 23:40:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:19.305 23:40:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:19.305 ************************************ 00:14:19.305 START TEST nvmf_bdev_io_wait 00:14:19.305 ************************************ 00:14:19.305 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:19.564 * Looking for test storage... 
00:14:19.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 
0 -eq 1 ']' 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.564 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.565 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.565 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.565 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.565 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.565 23:40:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.862 23:40:13 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:24.862 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:24.862 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:24.862 23:40:13 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:24.862 Found net devices under 0000:86:00.0: cvl_0_0 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:24.862 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:24.863 Found net devices under 0000:86:00.1: cvl_0_1 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- 
# NVMF_SECOND_TARGET_IP= 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:24.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:14:24.863 00:14:24.863 --- 10.0.0.2 ping statistics --- 00:14:24.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.863 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:24.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:14:24.863 00:14:24.863 --- 10.0.0.1 ping statistics --- 00:14:24.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.863 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:24.863 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=965713 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 965713 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@823 -- # '[' -z 965713 ']' 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:25.132 23:40:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:25.132 [2024-07-15 23:40:13.890668] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:14:25.132 [2024-07-15 23:40:13.890712] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.132 [2024-07-15 23:40:13.952140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.132 [2024-07-15 23:40:14.031001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
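[Editor's sketch] The `nvmf_tcp_init` sequence logged above (flush addresses, create the `cvl_0_0_ns_spdk` namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, verify with ping) can be summarized as a standalone dry-run script. Interface names and addresses are taken from this log; `RUN=echo` only prints the commands rather than executing them (running for real needs root and these exact NICs):

```shell
# Dry-run sketch of nvmf_tcp_init from nvmf/common.sh, as seen in this log.
# With RUN=echo (the default) nothing touches the live system.
RUN="${RUN:-echo}"
NS=cvl_0_0_ns_spdk

nvmf_tcp_init_sketch() {
  $RUN ip -4 addr flush cvl_0_0                               # clear stale addresses
  $RUN ip -4 addr flush cvl_0_1
  $RUN ip netns add "$NS"                                     # target-side namespace
  $RUN ip link set cvl_0_0 netns "$NS"                        # target NIC into namespace
  $RUN ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP (host side)
  $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  $RUN ip link set cvl_0_1 up
  $RUN ip netns exec "$NS" ip link set cvl_0_0 up
  $RUN ip netns exec "$NS" ip link set lo up
  $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
  $RUN ping -c 1 10.0.0.2                                     # initiator -> target
  $RUN ip netns exec "$NS" ping -c 1 10.0.0.1                 # target -> initiator
}

nvmf_tcp_init_sketch
```

The target application is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the `NVMF_APP` array is prefixed with `NVMF_TARGET_NS_CMD` in the log.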
00:14:25.132 [2024-07-15 23:40:14.031039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.132 [2024-07-15 23:40:14.031046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.132 [2024-07-15 23:40:14.031051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.132 [2024-07-15 23:40:14.031057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.132 [2024-07-15 23:40:14.031147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.132 [2024-07-15 23:40:14.031247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.132 [2024-07-15 23:40:14.031267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.132 [2024-07-15 23:40:14.031288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # return 0 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.066 [2024-07-15 23:40:14.821499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.066 Malloc0 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.066 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:26.067 [2024-07-15 23:40:14.884049] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=965889 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=965891 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.067 { 00:14:26.067 "params": { 00:14:26.067 "name": "Nvme$subsystem", 00:14:26.067 "trtype": "$TEST_TRANSPORT", 
00:14:26.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.067 "adrfam": "ipv4", 00:14:26.067 "trsvcid": "$NVMF_PORT", 00:14:26.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.067 "hdgst": ${hdgst:-false}, 00:14:26.067 "ddgst": ${ddgst:-false} 00:14:26.067 }, 00:14:26.067 "method": "bdev_nvme_attach_controller" 00:14:26.067 } 00:14:26.067 EOF 00:14:26.067 )") 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=965893 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.067 { 00:14:26.067 "params": { 00:14:26.067 "name": "Nvme$subsystem", 00:14:26.067 "trtype": "$TEST_TRANSPORT", 00:14:26.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.067 "adrfam": "ipv4", 00:14:26.067 "trsvcid": "$NVMF_PORT", 00:14:26.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.067 "hdgst": ${hdgst:-false}, 00:14:26.067 "ddgst": ${ddgst:-false} 00:14:26.067 }, 00:14:26.067 "method": "bdev_nvme_attach_controller" 00:14:26.067 } 00:14:26.067 EOF 00:14:26.067 )") 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 
128 -o 4096 -w flush -t 1 -s 256 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=965896 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.067 { 00:14:26.067 "params": { 00:14:26.067 "name": "Nvme$subsystem", 00:14:26.067 "trtype": "$TEST_TRANSPORT", 00:14:26.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.067 "adrfam": "ipv4", 00:14:26.067 "trsvcid": "$NVMF_PORT", 00:14:26.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.067 "hdgst": ${hdgst:-false}, 00:14:26.067 "ddgst": ${ddgst:-false} 00:14:26.067 }, 00:14:26.067 "method": "bdev_nvme_attach_controller" 00:14:26.067 } 00:14:26.067 EOF 00:14:26.067 )") 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:26.067 { 00:14:26.067 "params": { 00:14:26.067 "name": "Nvme$subsystem", 00:14:26.067 "trtype": "$TEST_TRANSPORT", 00:14:26.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:26.067 "adrfam": "ipv4", 00:14:26.067 "trsvcid": "$NVMF_PORT", 00:14:26.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:26.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:26.067 "hdgst": ${hdgst:-false}, 00:14:26.067 "ddgst": ${ddgst:-false} 00:14:26.067 }, 00:14:26.067 "method": "bdev_nvme_attach_controller" 00:14:26.067 } 00:14:26.067 EOF 00:14:26.067 )") 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 965889 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.067 "params": { 00:14:26.067 "name": "Nvme1", 00:14:26.067 "trtype": "tcp", 00:14:26.067 "traddr": "10.0.0.2", 00:14:26.067 "adrfam": "ipv4", 00:14:26.067 "trsvcid": "4420", 00:14:26.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.067 "hdgst": false, 00:14:26.067 "ddgst": false 00:14:26.067 }, 00:14:26.067 "method": "bdev_nvme_attach_controller" 00:14:26.067 }' 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
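[Editor's sketch] The `rpc_cmd` calls visible earlier in the log configure the target in a fixed order: bdev options, framework init, TCP transport, a 64 MiB / 512 B-block Malloc bdev, then subsystem `cnode1` with a namespace and a 10.0.0.2:4420 listener. A dry-run sketch of that sequence follows; the arguments are copied from the log, while `rpc.py` stands in for SPDK's `scripts/rpc.py` against `/var/tmp/spdk.sock` (an assumption about how `rpc_cmd` is wired, and the echo prefix keeps it from executing):

```shell
# Dry-run sketch of the target-configuration RPCs from bdev_io_wait.sh.
# RPC defaults to "echo rpc.py" so the commands are printed, not executed.
RPC="${RPC:-echo rpc.py}"

nvmf_target_config_sketch() {
  $RPC bdev_set_options -p 5 -c 1                 # bdev-layer pool/cache options
  $RPC framework_start_init                       # finish --wait-for-rpc startup
  $RPC nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB in-capsule data
  $RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

nvmf_target_config_sketch
```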
00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.067 "params": { 00:14:26.067 "name": "Nvme1", 00:14:26.067 "trtype": "tcp", 00:14:26.067 "traddr": "10.0.0.2", 00:14:26.067 "adrfam": "ipv4", 00:14:26.067 "trsvcid": "4420", 00:14:26.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.067 "hdgst": false, 00:14:26.067 "ddgst": false 00:14:26.067 }, 00:14:26.067 "method": "bdev_nvme_attach_controller" 00:14:26.067 }' 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.067 "params": { 00:14:26.067 "name": "Nvme1", 00:14:26.067 "trtype": "tcp", 00:14:26.067 "traddr": "10.0.0.2", 00:14:26.067 "adrfam": "ipv4", 00:14:26.067 "trsvcid": "4420", 00:14:26.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.067 "hdgst": false, 00:14:26.067 "ddgst": false 00:14:26.067 }, 00:14:26.067 "method": "bdev_nvme_attach_controller" 00:14:26.067 }' 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:26.067 23:40:14 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:26.067 "params": { 00:14:26.067 "name": "Nvme1", 00:14:26.067 "trtype": "tcp", 00:14:26.067 "traddr": "10.0.0.2", 00:14:26.067 "adrfam": "ipv4", 00:14:26.067 "trsvcid": "4420", 00:14:26.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:26.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:26.067 "hdgst": false, 00:14:26.067 "ddgst": false 00:14:26.067 }, 00:14:26.067 "method": "bdev_nvme_attach_controller" 00:14:26.067 }' 00:14:26.067 [2024-07-15 23:40:14.934243] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
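[Editor's sketch] Each of the four bdevperf instances receives its controller definition over `--json /dev/fd/63`, built by `gen_nvmf_target_json` from a heredoc whose expansion is what `printf '%s\n'` shows above. A minimal sketch of that expansion for one controller follows; the parameter values are the ones printed in the log, and the outer wrapper emitted by the real helper is deliberately omitted here:

```shell
# Sketch of how one controller entry of the bdevperf JSON config is expanded.
# hdgst/ddgst fall back to false via ${var:-false}, exactly as in the log.
gen_controller_params_sketch() {
  subsystem=1
  TEST_TRANSPORT=tcp
  NVMF_FIRST_TARGET_IP=10.0.0.2
  NVMF_PORT=4420
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_controller_params_sketch
```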
00:14:26.067 [2024-07-15 23:40:14.934292] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:26.067 [2024-07-15 23:40:14.935469] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:14:26.067 [2024-07-15 23:40:14.935523] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:26.067 [2024-07-15 23:40:14.937004] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:14:26.067 [2024-07-15 23:40:14.937008] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:14:26.067 [2024-07-15 23:40:14.937044] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:26.067 [2024-07-15 23:40:14.937045] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:26.326 [2024-07-15 23:40:15.113163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.326 [2024-07-15 23:40:15.191326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:26.326 [2024-07-15 23:40:15.204033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.326 [2024-07-15 23:40:15.282280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:26.326 [2024-07-15 23:40:15.295750] app.c: 909:spdk_app_start: 
*NOTICE*: Total cores available: 1 00:14:26.584 [2024-07-15 23:40:15.341614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.584 [2024-07-15 23:40:15.380512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:26.584 [2024-07-15 23:40:15.417594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:26.584 Running I/O for 1 seconds... 00:14:26.843 Running I/O for 1 seconds... 00:14:26.843 Running I/O for 1 seconds... 00:14:26.843 Running I/O for 1 seconds... 00:14:27.777 00:14:27.777 Latency(us) 00:14:27.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.777 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:27.777 Nvme1n1 : 1.00 244600.16 955.47 0.00 0.00 521.33 214.59 637.55 00:14:27.777 =================================================================================================================== 00:14:27.777 Total : 244600.16 955.47 0.00 0.00 521.33 214.59 637.55 00:14:27.777 00:14:27.777 Latency(us) 00:14:27.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.777 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:27.777 Nvme1n1 : 1.02 7710.39 30.12 0.00 0.00 16422.40 6154.69 25872.47 00:14:27.777 =================================================================================================================== 00:14:27.777 Total : 7710.39 30.12 0.00 0.00 16422.40 6154.69 25872.47 00:14:27.777 00:14:27.777 Latency(us) 00:14:27.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.777 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:27.777 Nvme1n1 : 1.01 12577.47 49.13 0.00 0.00 10144.33 5784.26 21085.50 00:14:27.777 =================================================================================================================== 00:14:27.777 Total : 12577.47 49.13 0.00 0.00 10144.33 5784.26 21085.50 00:14:27.777 
00:14:27.777 Latency(us) 00:14:27.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.777 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:27.777 Nvme1n1 : 1.00 7578.87 29.60 0.00 0.00 16844.93 4900.95 40347.38 00:14:27.777 =================================================================================================================== 00:14:27.777 Total : 7578.87 29.60 0.00 0.00 16844.93 4900.95 40347.38 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 965891 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 965893 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 965896 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.036 23:40:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:28.036 rmmod nvme_tcp 00:14:28.036 rmmod nvme_fabrics 
00:14:28.036 rmmod nvme_keyring 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 965713 ']' 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 965713 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@942 -- # '[' -z 965713 ']' 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # kill -0 965713 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # uname 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 965713 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # echo 'killing process with pid 965713' 00:14:28.295 killing process with pid 965713 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@961 -- # kill 965713 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # wait 965713 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.295 23:40:17 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.828 23:40:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:30.828 00:14:30.828 real 0m11.039s 00:14:30.828 user 0m20.107s 00:14:30.828 sys 0m5.662s 00:14:30.828 23:40:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:30.828 23:40:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:30.828 ************************************ 00:14:30.828 END TEST nvmf_bdev_io_wait 00:14:30.828 ************************************ 00:14:30.828 23:40:19 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:14:30.828 23:40:19 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:30.828 23:40:19 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:30.828 23:40:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:30.828 23:40:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:30.828 ************************************ 00:14:30.828 START TEST nvmf_queue_depth 00:14:30.828 ************************************ 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:30.828 * Looking for test storage... 
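[Editor's sketch] The `nvmftestfini` teardown logged at the end of the bdev_io_wait test (unload the NVMe modules, kill the target, remove the namespace state, flush the initiator address) can be sketched the same dry-run way. `RUN=echo` keeps it inert; PID 965713 is the `nvmfpid` from this particular run, and the `ip netns delete` step is an assumption about what `_remove_spdk_ns` does, since its output is redirected away in the log:

```shell
# Dry-run sketch of nvmftestfini / nvmf_tcp_fini as seen in this log.
RUN="${RUN:-echo}"

nvmftestfini_sketch() {
  $RUN modprobe -v -r nvme-tcp          # the real script retries up to 20 times
  $RUN modprobe -v -r nvme-fabrics
  $RUN kill 965713                      # nvmfpid of the nvmf_tgt started earlier
  $RUN ip netns delete cvl_0_0_ns_spdk  # assumed body of _remove_spdk_ns
  $RUN ip -4 addr flush cvl_0_1
}

nvmftestfini_sketch
```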
00:14:30.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.828 23:40:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 
']' 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.829 23:40:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local 
-a pci_devs 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:36.090 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:36.090 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.090 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:36.091 Found net devices under 0000:86:00.0: cvl_0_0 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:36.091 Found net devices under 0000:86:00.1: cvl_0_1 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:36.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:36.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:14:36.091 00:14:36.091 --- 10.0.0.2 ping statistics --- 00:14:36.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.091 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:14:36.091 00:14:36.091 --- 10.0.0.1 ping statistics --- 00:14:36.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.091 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=969664 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 969664 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@823 -- # '[' -z 969664 ']' 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:36.091 23:40:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.091 [2024-07-15 23:40:24.640050] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:14:36.091 [2024-07-15 23:40:24.640094] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.091 [2024-07-15 23:40:24.695657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.091 [2024-07-15 23:40:24.774272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.091 [2024-07-15 23:40:24.774304] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:36.091 [2024-07-15 23:40:24.774311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.091 [2024-07-15 23:40:24.774317] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.091 [2024-07-15 23:40:24.774322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.091 [2024-07-15 23:40:24.774338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # return 0 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.657 [2024-07-15 23:40:25.477556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 
00:14:36.657 Malloc0 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:36.657 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.658 [2024-07-15 23:40:25.530751] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=969909 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 
969909 /var/tmp/bdevperf.sock 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@823 -- # '[' -z 969909 ']' 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:36.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:36.658 23:40:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:36.658 [2024-07-15 23:40:25.578432] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:14:36.658 [2024-07-15 23:40:25.578475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969909 ] 00:14:36.916 [2024-07-15 23:40:25.632371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.916 [2024-07-15 23:40:25.706298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.482 23:40:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:37.482 23:40:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # return 0 00:14:37.482 23:40:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:37.482 23:40:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:37.482 23:40:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:37.482 NVMe0n1 00:14:37.482 23:40:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:37.741 23:40:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:37.741 Running I/O for 10 seconds... 
00:14:47.727 00:14:47.727 Latency(us) 00:14:47.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.727 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:47.727 Verification LBA range: start 0x0 length 0x4000 00:14:47.727 NVMe0n1 : 10.05 12138.76 47.42 0.00 0.00 84071.38 6810.05 65194.07 00:14:47.727 =================================================================================================================== 00:14:47.727 Total : 12138.76 47.42 0.00 0.00 84071.38 6810.05 65194.07 00:14:47.727 0 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 969909 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@942 -- # '[' -z 969909 ']' 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # kill -0 969909 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # uname 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 969909 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@960 -- # echo 'killing process with pid 969909' 00:14:47.727 killing process with pid 969909 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill 969909 00:14:47.727 Received shutdown signal, test time was about 10.000000 seconds 00:14:47.727 00:14:47.727 Latency(us) 00:14:47.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.727 
=================================================================================================================== 00:14:47.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:47.727 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # wait 969909 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.023 rmmod nvme_tcp 00:14:48.023 rmmod nvme_fabrics 00:14:48.023 rmmod nvme_keyring 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 969664 ']' 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 969664 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@942 -- # '[' -z 969664 ']' 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # kill -0 969664 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # uname 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:48.023 23:40:36 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 969664 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@960 -- # echo 'killing process with pid 969664' 00:14:48.023 killing process with pid 969664 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill 969664 00:14:48.023 23:40:36 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # wait 969664 00:14:48.283 23:40:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.283 23:40:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:48.283 23:40:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:48.283 23:40:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:48.283 23:40:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:48.283 23:40:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.283 23:40:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.283 23:40:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.817 23:40:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:50.817 00:14:50.817 real 0m19.870s 00:14:50.817 user 0m24.661s 00:14:50.817 sys 0m5.312s 00:14:50.817 23:40:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:50.817 23:40:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:50.817 ************************************ 00:14:50.817 END TEST nvmf_queue_depth 00:14:50.817 
************************************ 00:14:50.817 23:40:39 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:14:50.817 23:40:39 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:50.817 23:40:39 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:50.817 23:40:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:50.817 23:40:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:50.817 ************************************ 00:14:50.817 START TEST nvmf_target_multipath 00:14:50.817 ************************************ 00:14:50.817 23:40:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:50.817 * Looking for test storage... 00:14:50.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.817 23:40:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.817 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:50.817 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.817 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.817 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.817 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.817 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.818 
23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # 
nvmftestinit 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:50.818 23:40:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.087 
23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.087 23:40:44 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:56.087 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:56.087 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.087 
23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:56.087 Found net devices under 0000:86:00.0: cvl_0_0 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:56.087 Found net devices under 0000:86:00.1: cvl_0_1 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.087 23:40:44 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:56.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:56.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:14:56.087 00:14:56.087 --- 10.0.0.2 ping statistics --- 00:14:56.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.087 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:14:56.087 00:14:56.087 --- 10.0.0.1 ping statistics --- 00:14:56.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.087 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:14:56.087 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:56.088 only one NIC for nvmf test 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # 
nvmftestfini 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.088 23:40:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.088 rmmod nvme_tcp 00:14:56.088 rmmod nvme_fabrics 00:14:56.088 rmmod nvme_keyring 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.088 23:40:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 
addr flush cvl_0_1 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:58.622 00:14:58.622 real 0m7.854s 00:14:58.622 user 0m1.564s 00:14:58.622 sys 0m4.270s 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:58.622 23:40:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:58.622 ************************************ 00:14:58.622 END TEST nvmf_target_multipath 00:14:58.622 ************************************ 00:14:58.622 23:40:47 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:14:58.622 23:40:47 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:58.622 23:40:47 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:58.622 23:40:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:58.622 23:40:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:58.622 ************************************ 00:14:58.622 START TEST nvmf_zcopy 00:14:58.622 ************************************ 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:58.622 * Looking for test storage... 
00:14:58.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.622 23:40:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:58.623 23:40:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:03.895 23:40:52 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.895 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:03.896 23:40:52 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:03.896 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:03.896 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:03.896 Found net devices under 0000:86:00.0: cvl_0_0 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:03.896 Found net devices under 0000:86:00.1: cvl_0_1 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.896 23:40:52 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:03.896 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:04.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:15:04.156 00:15:04.156 --- 10.0.0.2 ping statistics --- 00:15:04.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.156 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:04.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:15:04.156 00:15:04.156 --- 10.0.0.1 ping statistics --- 00:15:04.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.156 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=978731 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 978731 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@823 -- # '[' -z 978731 ']' 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:04.156 23:40:52 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:04.156 [2024-07-15 23:40:52.976427] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:15:04.156 [2024-07-15 23:40:52.976468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.156 [2024-07-15 23:40:53.034369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.156 [2024-07-15 23:40:53.103997] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.156 [2024-07-15 23:40:53.104039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.156 [2024-07-15 23:40:53.104048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.156 [2024-07-15 23:40:53.104055] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.156 [2024-07-15 23:40:53.104061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:04.156 [2024-07-15 23:40:53.104089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # return 0 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.091 [2024-07-15 23:40:53.806925] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 
00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.091 [2024-07-15 23:40:53.823079] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.091 malloc0 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem 
config 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:05.091 23:40:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:05.091 { 00:15:05.091 "params": { 00:15:05.091 "name": "Nvme$subsystem", 00:15:05.091 "trtype": "$TEST_TRANSPORT", 00:15:05.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:05.091 "adrfam": "ipv4", 00:15:05.091 "trsvcid": "$NVMF_PORT", 00:15:05.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:05.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:05.091 "hdgst": ${hdgst:-false}, 00:15:05.091 "ddgst": ${ddgst:-false} 00:15:05.091 }, 00:15:05.091 "method": "bdev_nvme_attach_controller" 00:15:05.091 } 00:15:05.091 EOF 00:15:05.091 )") 00:15:05.092 23:40:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:05.092 23:40:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:05.092 23:40:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:05.092 23:40:53 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:05.092 "params": { 00:15:05.092 "name": "Nvme1", 00:15:05.092 "trtype": "tcp", 00:15:05.092 "traddr": "10.0.0.2", 00:15:05.092 "adrfam": "ipv4", 00:15:05.092 "trsvcid": "4420", 00:15:05.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:05.092 "hdgst": false, 00:15:05.092 "ddgst": false 00:15:05.092 }, 00:15:05.092 "method": "bdev_nvme_attach_controller" 00:15:05.092 }' 00:15:05.092 [2024-07-15 23:40:53.901821] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:15:05.092 [2024-07-15 23:40:53.901864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid978814 ] 00:15:05.092 [2024-07-15 23:40:53.954972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.092 [2024-07-15 23:40:54.033690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.349 Running I/O for 10 seconds... 00:15:17.592 00:15:17.592 Latency(us) 00:15:17.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.592 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:17.592 Verification LBA range: start 0x0 length 0x1000 00:15:17.592 Nvme1n1 : 10.01 8712.29 68.06 0.00 0.00 14649.45 630.43 27354.16 00:15:17.592 =================================================================================================================== 00:15:17.592 Total : 8712.29 68.06 0.00 0.00 14649.45 630.43 27354.16 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=980639 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:15:17.592 { 00:15:17.592 "params": { 00:15:17.592 "name": "Nvme$subsystem", 00:15:17.592 "trtype": "$TEST_TRANSPORT", 00:15:17.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:17.592 "adrfam": "ipv4", 00:15:17.592 "trsvcid": "$NVMF_PORT", 00:15:17.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:17.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:17.592 "hdgst": ${hdgst:-false}, 00:15:17.592 "ddgst": ${ddgst:-false} 00:15:17.592 }, 00:15:17.592 "method": "bdev_nvme_attach_controller" 00:15:17.592 } 00:15:17.592 EOF 00:15:17.592 )") 00:15:17.592 [2024-07-15 23:41:04.532266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.532298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:17.592 [2024-07-15 23:41:04.540266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.540285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:17.592 23:41:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:17.592 "params": { 00:15:17.592 "name": "Nvme1", 00:15:17.592 "trtype": "tcp", 00:15:17.592 "traddr": "10.0.0.2", 00:15:17.592 "adrfam": "ipv4", 00:15:17.592 "trsvcid": "4420", 00:15:17.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:17.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:17.592 "hdgst": false, 00:15:17.592 "ddgst": false 00:15:17.592 }, 00:15:17.592 "method": "bdev_nvme_attach_controller" 00:15:17.592 }' 00:15:17.592 [2024-07-15 23:41:04.548274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.548287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.556288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.556299] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.564309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.564320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.572330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.572343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.572750] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:15:17.592 [2024-07-15 23:41:04.572790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980639 ] 00:15:17.592 [2024-07-15 23:41:04.580358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.580373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.588372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.588383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.596391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.596402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.604411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.604422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.612433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.612444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.620455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.620466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.627207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.592 [2024-07-15 23:41:04.628478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.628489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.636498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.636511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.644518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.644529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.652541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.652552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.660563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.660575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.668594] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.668614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.676609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.676621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.684632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.684644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.692650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.692661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.700683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.592 [2024-07-15 23:41:04.700696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.592 [2024-07-15 23:41:04.701738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.593 [2024-07-15 23:41:04.708700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.593 [2024-07-15 23:41:04.708712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.593 [2024-07-15 23:41:04.716731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.593 [2024-07-15 23:41:04.716749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.593 [2024-07-15 23:41:04.724748] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.593 [2024-07-15 23:41:04.724763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace
00:15:17.593 [2024-07-15 23:41:04.732774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:17.593 [2024-07-15 23:41:04.732787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:17.593 (the same *ERROR* pair repeats every ~8-10 ms, from 23:41:04.740789 through 23:41:04.893231)
00:15:17.593 Running I/O for 5 seconds...
00:15:17.595 (the same *ERROR* pair continues repeating, from 23:41:04.901235 through 23:41:06.216141)
[2024-07-15 23:41:06.216158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.225206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.225233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.234421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.234439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.243458] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.243476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.252749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.252766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.261574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.261593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.271084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.271102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.280140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.280159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.289641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.289659] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.298265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.298283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.306984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.307003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.316064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.316082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.325146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.325165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.333721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.333740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.342884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.342902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.352077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.352095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.361271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.361290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:17.596 [2024-07-15 23:41:06.370521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.370540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.379104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.379122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.387696] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.387714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.397117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.397136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.404030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.404048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.415010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.415032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.423211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.423233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.432570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.432588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.441215] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.441238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.450488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.450506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.459073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.459091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.467475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.467494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.476139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.476158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.485336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.485354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.494222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.494246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.503159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.503177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.512315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.512333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.520755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.520773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.529120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.529138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.537980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.537998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.547386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.547405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.596 [2024-07-15 23:41:06.555970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.596 [2024-07-15 23:41:06.555988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.564889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.564909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.573547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.573565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.582098] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 
[2024-07-15 23:41:06.582117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.590712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.590730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.600119] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.600137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.608977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.608995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.617520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.617538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.625866] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.625884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.635040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.635060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.644828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.644847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.653714] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.653732] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.662998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.663017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.671664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.671687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.680775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.680795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.689760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.689779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.698466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.698486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.707074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.707093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.715937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.715956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.724215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.724241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:17.857 [2024-07-15 23:41:06.733771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.733789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.742415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.742437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.750970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.750989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.759662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.759680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.769318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.769337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.777906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.777925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.787166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.787185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.795807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.795826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.804458] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.804478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.813486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.813505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:17.857 [2024-07-15 23:41:06.822157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:17.857 [2024-07-15 23:41:06.822175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.831526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.831555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.841009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.841033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.847942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.847961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.858972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.858992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.868420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.868439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.876782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.876800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.885523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.885542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.894338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.894357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.903702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.903720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.912451] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.912469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.921735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.921754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.930959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.930978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.940061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.940081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.948496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 
[2024-07-15 23:41:06.948514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.957542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.957560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.966177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.966196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.975479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.975497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.983931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.983950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:06.992741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:06.992761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:07.001270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:07.001288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:07.010661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:07.010683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:07.019298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:07.019316] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:07.028047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:07.028065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:07.036638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:07.036656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:07.045922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:07.045941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:07.055033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:07.055051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:07.064118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:07.064136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:07.073229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:07.073247] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.117 [2024-07-15 23:41:07.081770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.117 [2024-07-15 23:41:07.081788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.377 [2024-07-15 23:41:07.090873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.377 [2024-07-15 23:41:07.090892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:18.377 [2024-07-15 23:41:07.099708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.377 [2024-07-15 23:41:07.099726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.377 [2024-07-15 23:41:07.108374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.377 [2024-07-15 23:41:07.108393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.377 [2024-07-15 23:41:07.116664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.377 [2024-07-15 23:41:07.116681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.377 [2024-07-15 23:41:07.125345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.377 [2024-07-15 23:41:07.125363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.377 [2024-07-15 23:41:07.134547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.377 [2024-07-15 23:41:07.134565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.377 [2024-07-15 23:41:07.143254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.377 [2024-07-15 23:41:07.143272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.377 [2024-07-15 23:41:07.152368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.377 [2024-07-15 23:41:07.152386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.377 [2024-07-15 23:41:07.160708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.377 [2024-07-15 23:41:07.160727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.377 [2024-07-15 23:41:07.169127] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.377 [2024-07-15 23:41:07.169145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair (subsystem.c:2058 "Requested NSID 1 already in use" followed by nvmf_rpc.c:1553 "Unable to add namespace") repeats continuously from 23:41:07.178241 through 23:41:08.628835, elapsed 00:15:18.377-00:15:19.711 ...]
00:15:19.711 [2024-07-15 23:41:08.637552] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.711 [2024-07-15 23:41:08.637571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.711 [2024-07-15 23:41:08.645848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.711 [2024-07-15 23:41:08.645867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.711 [2024-07-15 23:41:08.654511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.711 [2024-07-15 23:41:08.654529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.711 [2024-07-15 23:41:08.663117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.711 [2024-07-15 23:41:08.663135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.711 [2024-07-15 23:41:08.671812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.711 [2024-07-15 23:41:08.671831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.711 [2024-07-15 23:41:08.680544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.711 [2024-07-15 23:41:08.680562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.687618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.687636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.698622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.698644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.707259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.707278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.716414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.716432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.725031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.725049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.734323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.734341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.742877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.742896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.751396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.751415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.760148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.760166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.769374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.769393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.776300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 
[2024-07-15 23:41:08.776318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.787080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.787099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.796517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.796535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.805589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.805609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.812839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.812857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.823143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.823162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.832164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.832184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.840836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.840855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.850141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.850160] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.859861] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.859879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.868621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.868640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.877889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.877909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.886533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.886553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.895027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.895046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.903574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.903591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.912057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.912076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.921234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.921252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:19.971 [2024-07-15 23:41:08.930089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.930108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:19.971 [2024-07-15 23:41:08.938392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:19.971 [2024-07-15 23:41:08.938411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:08.947729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:08.947748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:08.956637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:08.956655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:08.965145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:08.965164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:08.974123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:08.974142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:08.983528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:08.983546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:08.992824] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:08.992843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.001983] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.002001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.010701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.010719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.019580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.019599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.067914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.067932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.076564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.076583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.084991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.085009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.094266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.094285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.101326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.101344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.111665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.111683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.120927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.120945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.129312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.129331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.231 [2024-07-15 23:41:09.138998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.231 [2024-07-15 23:41:09.139017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.232 [2024-07-15 23:41:09.147348] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.232 [2024-07-15 23:41:09.147366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.232 [2024-07-15 23:41:09.156179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.232 [2024-07-15 23:41:09.156197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.232 [2024-07-15 23:41:09.164708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.232 [2024-07-15 23:41:09.164726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.232 [2024-07-15 23:41:09.173153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.232 [2024-07-15 23:41:09.173171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.232 [2024-07-15 23:41:09.181465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.232 
[2024-07-15 23:41:09.181484] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.232 [2024-07-15 23:41:09.190375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.232 [2024-07-15 23:41:09.190393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.232 [2024-07-15 23:41:09.199018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.232 [2024-07-15 23:41:09.199037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.491 [2024-07-15 23:41:09.208416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.491 [2024-07-15 23:41:09.208435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.491 [2024-07-15 23:41:09.217360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.491 [2024-07-15 23:41:09.217378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.491 [2024-07-15 23:41:09.226898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.491 [2024-07-15 23:41:09.226918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.491 [2024-07-15 23:41:09.236844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.491 [2024-07-15 23:41:09.236863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.491 [2024-07-15 23:41:09.245640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.491 [2024-07-15 23:41:09.245658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.491 [2024-07-15 23:41:09.255006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.491 [2024-07-15 23:41:09.255025] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.491 [2024-07-15 23:41:09.264257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.491 [2024-07-15 23:41:09.264275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.491 [2024-07-15 23:41:09.273054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.491 [2024-07-15 23:41:09.273072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.491 [2024-07-15 23:41:09.282223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.491 [2024-07-15 23:41:09.282249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.491 [2024-07-15 23:41:09.290802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.491 [2024-07-15 23:41:09.290821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.300016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.300034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.309181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.309199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.318302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.318321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.326895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.326913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:20.492 [2024-07-15 23:41:09.335869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.335887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.345156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.345174] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.354243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.354261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.363400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.363419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.372015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.372033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.381143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.381161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.390274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.390293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.399569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.399587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.408825] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.408843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.417979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.417998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.426745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.426764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.436004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.436024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.445174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.445193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.454537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.454557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.492 [2024-07-15 23:41:09.463749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.492 [2024-07-15 23:41:09.463769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.472392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.472413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.480997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.481017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.489827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.489847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.498001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.498020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.507443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.507461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.514424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.514442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.525424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.525443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.534341] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.534360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.543698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.543717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.553044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 
[2024-07-15 23:41:09.553062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.562243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.562262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.571275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.571294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.580338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.580360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.589365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.589384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.598044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.598063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.607163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.607181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.615819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.615839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.624498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.624516] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.633393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.633412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.642572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.642591] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.651879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.651900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.660389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.660407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.669315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.669334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.678403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.678422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.687514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.687533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.696979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.696999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:20.752 [2024-07-15 23:41:09.705147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.705166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.713690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.713710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:20.752 [2024-07-15 23:41:09.722190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:20.752 [2024-07-15 23:41:09.722209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.730823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.730842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.740302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.740320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.747496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.747519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.758168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.758188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.766816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.766835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.776120] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.776140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.785891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.785910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.794523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.794542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.803579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.803597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.812085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.812104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.821302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.821323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.830564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.830582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.839836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.839854] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.848238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.848256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.857414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.857432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.866462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.866480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.875576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.875594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.884203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.884222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.893089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.893107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.901760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.901779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.910522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.910543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 00:15:21.049 Latency(us) 00:15:21.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.049 Job: 
Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:21.049 Nvme1n1 : 5.01 16553.72 129.33 0.00 0.00 7725.78 2535.96 49465.43 00:15:21.049 =================================================================================================================== 00:15:21.049 Total : 16553.72 129.33 0.00 0.00 7725.78 2535.96 49465.43 00:15:21.049 [2024-07-15 23:41:09.919606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.919625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.925021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.925037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.933043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.933058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.941067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.941079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.949092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.949111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.957112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.957126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.965136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.965150] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.973153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.973167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.981173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.981187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.989195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.989209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:09.997215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:09.997236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.049 [2024-07-15 23:41:10.005247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.049 [2024-07-15 23:41:10.005262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.050 [2024-07-15 23:41:10.013264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.050 [2024-07-15 23:41:10.013277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.050 [2024-07-15 23:41:10.021285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.050 [2024-07-15 23:41:10.021298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.309 [2024-07-15 23:41:10.029298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.309 [2024-07-15 23:41:10.029310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:15:21.309 [2024-07-15 23:41:10.037322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.309 [2024-07-15 23:41:10.037339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.309 [2024-07-15 23:41:10.045345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.309 [2024-07-15 23:41:10.045358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.309 [2024-07-15 23:41:10.053373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.309 [2024-07-15 23:41:10.053394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.309 [2024-07-15 23:41:10.061382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.309 [2024-07-15 23:41:10.061393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.309 [2024-07-15 23:41:10.069405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.309 [2024-07-15 23:41:10.069416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.309 [2024-07-15 23:41:10.077428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.309 [2024-07-15 23:41:10.077439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.309 [2024-07-15 23:41:10.085448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.309 [2024-07-15 23:41:10.085461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.309 [2024-07-15 23:41:10.093470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.309 [2024-07-15 23:41:10.093480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.309 [2024-07-15 23:41:10.101492] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.309 [2024-07-15 23:41:10.101502] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (980639) - No such process 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 980639 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:21.309 delay0 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:21.309 23:41:10 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:21.309 [2024-07-15 23:41:10.222178] nvme_fabric.c: 
295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:27.879 Initializing NVMe Controllers 00:15:27.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:27.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:27.879 Initialization complete. Launching workers. 00:15:27.879 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 116 00:15:27.879 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 402, failed to submit 34 00:15:27.879 success 231, unsuccess 171, failed 0 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.879 rmmod nvme_tcp 00:15:27.879 rmmod nvme_fabrics 00:15:27.879 rmmod nvme_keyring 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 978731 ']' 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 978731 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@942 -- # '[' -z 978731 ']' 00:15:27.879 
23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # kill -0 978731 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # uname 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 978731 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@960 -- # echo 'killing process with pid 978731' 00:15:27.879 killing process with pid 978731 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@961 -- # kill 978731 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # wait 978731 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.879 23:41:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.785 23:41:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.785 00:15:29.785 real 0m31.438s 00:15:29.785 user 0m42.472s 00:15:29.785 sys 0m10.312s 00:15:29.785 23:41:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1118 -- # 
xtrace_disable 00:15:29.785 23:41:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:29.785 ************************************ 00:15:29.785 END TEST nvmf_zcopy 00:15:29.785 ************************************ 00:15:29.785 23:41:18 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:15:29.785 23:41:18 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:29.785 23:41:18 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:15:29.786 23:41:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:29.786 23:41:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:29.786 ************************************ 00:15:29.786 START TEST nvmf_nmic 00:15:29.786 ************************************ 00:15:29.786 23:41:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:30.045 * Looking for test storage... 
00:15:30.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:30.045 23:41:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.319 23:41:23 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:35.319 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:35.319 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 
== e810 ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:35.319 Found net devices under 0000:86:00.0: cvl_0_0 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:35.319 Found net devices under 0000:86:00.1: cvl_0_1 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.319 23:41:23 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:35.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:15:35.319 00:15:35.319 --- 10.0.0.2 ping statistics --- 00:15:35.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.319 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:15:35.319 23:41:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:35.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:15:35.319 00:15:35.319 --- 10.0.0.1 ping statistics --- 00:15:35.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.319 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=985989 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 985989 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@823 -- # '[' -z 985989 ']' 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:35.319 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:35.319 [2024-07-15 23:41:24.085416] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:15:35.319 [2024-07-15 23:41:24.085462] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.319 [2024-07-15 23:41:24.143904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:35.319 [2024-07-15 23:41:24.226351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.319 [2024-07-15 23:41:24.226386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.319 [2024-07-15 23:41:24.226393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.319 [2024-07-15 23:41:24.226399] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.319 [2024-07-15 23:41:24.226405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:35.319 [2024-07-15 23:41:24.226453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.319 [2024-07-15 23:41:24.226530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.319 [2024-07-15 23:41:24.226615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:35.319 [2024-07-15 23:41:24.226616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # return 0 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 [2024-07-15 23:41:24.942106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 Malloc0 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 [2024-07-15 23:41:24.993824] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:36.254 test case1: single bdev can't be used in multiple subsystems 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.254 23:41:24 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 23:41:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.254 23:41:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:36.254 23:41:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.254 23:41:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 23:41:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.254 23:41:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:36.254 23:41:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:36.254 23:41:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.254 23:41:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.254 [2024-07-15 23:41:25.017743] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:36.254 [2024-07-15 23:41:25.017765] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:36.254 [2024-07-15 23:41:25.017772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:36.254 request: 00:15:36.254 { 00:15:36.254 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:36.255 "namespace": { 00:15:36.255 "bdev_name": "Malloc0", 00:15:36.255 "no_auto_visible": false 00:15:36.255 }, 00:15:36.255 "method": "nvmf_subsystem_add_ns", 00:15:36.255 "req_id": 1 00:15:36.255 } 00:15:36.255 Got JSON-RPC error response 00:15:36.255 response: 00:15:36.255 { 00:15:36.255 "code": -32602, 00:15:36.255 "message": "Invalid parameters" 00:15:36.255 } 00:15:36.255 23:41:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:15:36.255 23:41:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:36.255 23:41:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:36.255 23:41:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding 
namespace failed - expected result.' 00:15:36.255 Adding namespace failed - expected result. 00:15:36.255 23:41:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:36.255 test case2: host connect to nvmf target in multiple paths 00:15:36.255 23:41:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:36.255 23:41:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.255 23:41:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:36.255 [2024-07-15 23:41:25.029880] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:36.255 23:41:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.255 23:41:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:37.627 23:41:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:38.564 23:41:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.564 23:41:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1192 -- # local i=0 00:15:38.564 23:41:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.564 23:41:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:15:38.564 23:41:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # sleep 2 00:15:40.467 23:41:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:15:40.467 23:41:29 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:15:40.467 23:41:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.467 23:41:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:15:40.467 23:41:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.467 23:41:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # return 0 00:15:40.467 23:41:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:40.733 [global] 00:15:40.733 thread=1 00:15:40.733 invalidate=1 00:15:40.733 rw=write 00:15:40.733 time_based=1 00:15:40.733 runtime=1 00:15:40.733 ioengine=libaio 00:15:40.733 direct=1 00:15:40.733 bs=4096 00:15:40.733 iodepth=1 00:15:40.733 norandommap=0 00:15:40.733 numjobs=1 00:15:40.733 00:15:40.733 verify_dump=1 00:15:40.733 verify_backlog=512 00:15:40.733 verify_state_save=0 00:15:40.733 do_verify=1 00:15:40.733 verify=crc32c-intel 00:15:40.733 [job0] 00:15:40.733 filename=/dev/nvme0n1 00:15:40.733 Could not set queue depth (nvme0n1) 00:15:40.990 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:40.990 fio-3.35 00:15:40.990 Starting 1 thread 00:15:41.921 00:15:41.921 job0: (groupid=0, jobs=1): err= 0: pid=987073: Mon Jul 15 23:41:30 2024 00:15:41.921 read: IOPS=917, BW=3672KiB/s (3760kB/s)(3716KiB/1012msec) 00:15:41.921 slat (nsec): min=6206, max=27583, avg=7266.36, stdev=2110.67 00:15:41.921 clat (usec): min=281, max=41361, avg=794.37, stdev=4405.28 00:15:41.921 lat (usec): min=287, max=41370, avg=801.63, stdev=4406.72 00:15:41.921 clat percentiles (usec): 00:15:41.921 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 306], 00:15:41.921 | 30.00th=[ 310], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 314], 00:15:41.921 | 70.00th=[ 
318], 80.00th=[ 318], 90.00th=[ 322], 95.00th=[ 326], 00:15:41.921 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:41.921 | 99.99th=[41157] 00:15:41.921 write: IOPS=1011, BW=4047KiB/s (4145kB/s)(4096KiB/1012msec); 0 zone resets 00:15:41.921 slat (usec): min=9, max=22865, avg=33.41, stdev=714.21 00:15:41.921 clat (usec): min=167, max=428, avg=220.19, stdev=44.49 00:15:41.921 lat (usec): min=178, max=23249, avg=253.60, stdev=720.67 00:15:41.921 clat percentiles (usec): 00:15:41.921 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:15:41.921 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 219], 00:15:41.921 | 70.00th=[ 229], 80.00th=[ 247], 90.00th=[ 306], 95.00th=[ 310], 00:15:41.921 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 383], 99.95th=[ 429], 00:15:41.921 | 99.99th=[ 429] 00:15:41.921 bw ( KiB/s): min= 408, max= 7784, per=100.00%, avg=4096.00, stdev=5215.62, samples=2 00:15:41.921 iops : min= 102, max= 1946, avg=1024.00, stdev=1303.90, samples=2 00:15:41.921 lat (usec) : 250=42.09%, 500=57.30%, 750=0.05% 00:15:41.921 lat (msec) : 50=0.56% 00:15:41.921 cpu : usr=1.09%, sys=2.18%, ctx=1955, majf=0, minf=2 00:15:41.921 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:41.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.921 issued rwts: total=929,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.922 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:41.922 00:15:41.922 Run status group 0 (all jobs): 00:15:41.922 READ: bw=3672KiB/s (3760kB/s), 3672KiB/s-3672KiB/s (3760kB/s-3760kB/s), io=3716KiB (3805kB), run=1012-1012msec 00:15:41.922 WRITE: bw=4047KiB/s (4145kB/s), 4047KiB/s-4047KiB/s (4145kB/s-4145kB/s), io=4096KiB (4194kB), run=1012-1012msec 00:15:41.922 00:15:41.922 Disk stats (read/write): 00:15:41.922 nvme0n1: ios=952/1024, merge=0/0, 
ticks=1598/213, in_queue=1811, util=98.30% 00:15:41.922 23:41:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:42.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1213 -- # local i=0 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1225 -- # return 0 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:42.179 rmmod nvme_tcp 00:15:42.179 rmmod nvme_fabrics 00:15:42.179 rmmod nvme_keyring 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@489 -- # '[' -n 985989 ']' 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 985989 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@942 -- # '[' -z 985989 ']' 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # kill -0 985989 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # uname 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:15:42.179 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 985989 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@960 -- # echo 'killing process with pid 985989' 00:15:42.437 killing process with pid 985989 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@961 -- # kill 985989 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # wait 985989 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:42.437 23:41:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.969 23:41:33 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:44.969 00:15:44.969 real 0m14.723s 00:15:44.969 user 0m35.403s 00:15:44.969 sys 0m4.692s 00:15:44.969 23:41:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1118 -- # xtrace_disable 00:15:44.969 23:41:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:44.969 ************************************ 00:15:44.969 END TEST nvmf_nmic 00:15:44.969 ************************************ 00:15:44.969 23:41:33 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:15:44.969 23:41:33 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:44.969 23:41:33 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:15:44.969 23:41:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:15:44.969 23:41:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:44.969 ************************************ 00:15:44.969 START TEST nvmf_fio_target 00:15:44.969 ************************************ 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:44.969 * Looking for test storage... 
00:15:44.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:44.969 23:41:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:50.244 
23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.244 
23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:50.244 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:50.244 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:50.244 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 
-- # [[ ice == unknown ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:50.245 Found net devices under 0000:86:00.0: cvl_0_0 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:50.245 Found net devices under 0000:86:00.1: cvl_0_1 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.245 23:41:38 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:50.245 23:41:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:50.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:50.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:15:50.245 00:15:50.245 --- 10.0.0.2 ping statistics --- 00:15:50.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.245 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:50.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:15:50.245 00:15:50.245 --- 10.0.0.1 ping statistics --- 00:15:50.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.245 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=990741 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 990741 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@823 -- 
# '[' -z 990741 ']' 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:15:50.245 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.245 [2024-07-15 23:41:39.131992] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:15:50.245 [2024-07-15 23:41:39.132035] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.245 [2024-07-15 23:41:39.190514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:50.515 [2024-07-15 23:41:39.267382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.515 [2024-07-15 23:41:39.267422] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.515 [2024-07-15 23:41:39.267429] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.515 [2024-07-15 23:41:39.267434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.515 [2024-07-15 23:41:39.267439] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:50.515 [2024-07-15 23:41:39.267492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.515 [2024-07-15 23:41:39.267569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.515 [2024-07-15 23:41:39.267655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:50.515 [2024-07-15 23:41:39.267656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.080 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:15:51.080 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # return 0 00:15:51.080 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.080 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:51.080 23:41:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.080 23:41:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.080 23:41:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:51.337 [2024-07-15 23:41:40.140641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.337 23:41:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:51.595 23:41:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:51.595 23:41:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:51.852 23:41:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:51.852 23:41:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:15:51.852 23:41:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:51.852 23:41:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.109 23:41:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:52.109 23:41:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:52.367 23:41:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.624 23:41:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:52.624 23:41:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.624 23:41:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:52.624 23:41:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:52.880 23:41:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:52.880 23:41:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:53.138 23:41:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:53.138 23:41:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:53.138 23:41:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:53.395 23:41:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:53.395 23:41:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:53.652 23:41:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.652 [2024-07-15 23:41:42.610717] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.909 23:41:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:53.909 23:41:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:54.181 23:41:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:55.111 23:41:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:55.111 23:41:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1192 -- # local i=0 00:15:55.111 23:41:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:15:55.111 23:41:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # [[ -n 4 ]] 00:15:55.111 23:41:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # nvme_device_counter=4 00:15:55.111 23:41:44 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # sleep 2 00:15:57.632 23:41:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:15:57.632 23:41:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:15:57.632 23:41:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:15:57.632 23:41:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_devices=4 00:15:57.632 23:41:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.632 23:41:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # return 0 00:15:57.632 23:41:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:57.632 [global] 00:15:57.632 thread=1 00:15:57.632 invalidate=1 00:15:57.632 rw=write 00:15:57.632 time_based=1 00:15:57.632 runtime=1 00:15:57.632 ioengine=libaio 00:15:57.632 direct=1 00:15:57.632 bs=4096 00:15:57.632 iodepth=1 00:15:57.632 norandommap=0 00:15:57.632 numjobs=1 00:15:57.632 00:15:57.632 verify_dump=1 00:15:57.632 verify_backlog=512 00:15:57.632 verify_state_save=0 00:15:57.632 do_verify=1 00:15:57.632 verify=crc32c-intel 00:15:57.632 [job0] 00:15:57.632 filename=/dev/nvme0n1 00:15:57.632 [job1] 00:15:57.632 filename=/dev/nvme0n2 00:15:57.632 [job2] 00:15:57.632 filename=/dev/nvme0n3 00:15:57.633 [job3] 00:15:57.633 filename=/dev/nvme0n4 00:15:57.633 Could not set queue depth (nvme0n1) 00:15:57.633 Could not set queue depth (nvme0n2) 00:15:57.633 Could not set queue depth (nvme0n3) 00:15:57.633 Could not set queue depth (nvme0n4) 00:15:57.633 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.633 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:15:57.633 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.633 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:57.633 fio-3.35 00:15:57.633 Starting 4 threads 00:15:59.003 00:15:59.003 job0: (groupid=0, jobs=1): err= 0: pid=992168: Mon Jul 15 23:41:47 2024 00:15:59.003 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:15:59.003 slat (nsec): min=10449, max=22671, avg=21538.41, stdev=2533.88 00:15:59.003 clat (usec): min=40852, max=41981, avg=41131.59, stdev=358.79 00:15:59.003 lat (usec): min=40874, max=42003, avg=41153.13, stdev=358.35 00:15:59.003 clat percentiles (usec): 00:15:59.003 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:59.003 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:59.003 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:15:59.003 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:59.003 | 99.99th=[42206] 00:15:59.003 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:15:59.003 slat (nsec): min=8992, max=39665, avg=10432.79, stdev=2296.39 00:15:59.003 clat (usec): min=172, max=381, avg=212.32, stdev=22.05 00:15:59.004 lat (usec): min=182, max=420, avg=222.75, stdev=22.63 00:15:59.004 clat percentiles (usec): 00:15:59.004 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:15:59.004 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:15:59.004 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 249], 00:15:59.004 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 383], 99.95th=[ 383], 00:15:59.004 | 99.99th=[ 383] 00:15:59.004 bw ( KiB/s): min= 4096, max= 4096, per=31.52%, avg=4096.00, stdev= 0.00, samples=1 00:15:59.004 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:59.004 lat (usec) : 250=91.39%, 500=4.49% 
00:15:59.004 lat (msec) : 50=4.12% 00:15:59.004 cpu : usr=0.29%, sys=0.49%, ctx=534, majf=0, minf=1 00:15:59.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.004 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:59.004 job1: (groupid=0, jobs=1): err= 0: pid=992169: Mon Jul 15 23:41:47 2024 00:15:59.004 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:15:59.004 slat (nsec): min=9781, max=24074, avg=21941.36, stdev=2753.62 00:15:59.004 clat (usec): min=40899, max=42492, avg=41086.93, stdev=385.00 00:15:59.004 lat (usec): min=40921, max=42502, avg=41108.87, stdev=382.75 00:15:59.004 clat percentiles (usec): 00:15:59.004 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:59.004 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:59.004 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:15:59.004 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:15:59.004 | 99.99th=[42730] 00:15:59.004 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:15:59.004 slat (nsec): min=8940, max=36282, avg=10555.05, stdev=2153.45 00:15:59.004 clat (usec): min=154, max=386, avg=219.70, stdev=31.08 00:15:59.004 lat (usec): min=164, max=405, avg=230.26, stdev=31.70 00:15:59.004 clat percentiles (usec): 00:15:59.004 | 1.00th=[ 163], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 198], 00:15:59.004 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:15:59.004 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 255], 95.00th=[ 277], 00:15:59.004 | 99.00th=[ 330], 99.50th=[ 375], 99.90th=[ 388], 99.95th=[ 388], 00:15:59.004 | 99.99th=[ 388] 00:15:59.004 bw ( KiB/s): min= 4096, 
max= 4096, per=31.52%, avg=4096.00, stdev= 0.00, samples=1 00:15:59.004 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:59.004 lat (usec) : 250=83.90%, 500=11.99% 00:15:59.004 lat (msec) : 50=4.12% 00:15:59.004 cpu : usr=0.39%, sys=0.39%, ctx=534, majf=0, minf=2 00:15:59.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.004 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:59.004 job2: (groupid=0, jobs=1): err= 0: pid=992170: Mon Jul 15 23:41:47 2024 00:15:59.004 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:59.004 slat (nsec): min=6689, max=26407, avg=7544.87, stdev=1227.58 00:15:59.004 clat (usec): min=336, max=643, avg=378.43, stdev=25.28 00:15:59.004 lat (usec): min=343, max=666, avg=385.98, stdev=25.55 00:15:59.004 clat percentiles (usec): 00:15:59.004 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 363], 00:15:59.004 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 379], 00:15:59.004 | 70.00th=[ 383], 80.00th=[ 392], 90.00th=[ 404], 95.00th=[ 412], 00:15:59.004 | 99.00th=[ 478], 99.50th=[ 515], 99.90th=[ 644], 99.95th=[ 644], 00:15:59.004 | 99.99th=[ 644] 00:15:59.004 write: IOPS=1789, BW=7157KiB/s (7329kB/s)(7164KiB/1001msec); 0 zone resets 00:15:59.004 slat (usec): min=8, max=22939, avg=24.71, stdev=541.82 00:15:59.004 clat (usec): min=145, max=446, avg=198.16, stdev=31.53 00:15:59.004 lat (usec): min=171, max=23335, avg=222.87, stdev=547.52 00:15:59.004 clat percentiles (usec): 00:15:59.004 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:15:59.004 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:15:59.004 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 229], 95.00th=[ 
253], 00:15:59.004 | 99.00th=[ 343], 99.50th=[ 375], 99.90th=[ 437], 99.95th=[ 445], 00:15:59.004 | 99.99th=[ 445] 00:15:59.004 bw ( KiB/s): min= 8192, max= 8192, per=63.03%, avg=8192.00, stdev= 0.00, samples=1 00:15:59.004 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:59.004 lat (usec) : 250=51.01%, 500=48.69%, 750=0.30% 00:15:59.004 cpu : usr=1.70%, sys=3.30%, ctx=3330, majf=0, minf=1 00:15:59.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.004 issued rwts: total=1536,1791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:59.004 job3: (groupid=0, jobs=1): err= 0: pid=992171: Mon Jul 15 23:41:47 2024 00:15:59.004 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:15:59.004 slat (nsec): min=9293, max=23268, avg=21712.77, stdev=2847.33 00:15:59.004 clat (usec): min=40871, max=41123, avg=40967.11, stdev=62.50 00:15:59.004 lat (usec): min=40894, max=41144, avg=40988.82, stdev=62.42 00:15:59.004 clat percentiles (usec): 00:15:59.004 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:59.004 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:59.004 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:59.004 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:59.004 | 99.99th=[41157] 00:15:59.004 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:15:59.004 slat (nsec): min=9110, max=82133, avg=10995.53, stdev=4232.02 00:15:59.004 clat (usec): min=178, max=483, avg=223.89, stdev=35.60 00:15:59.004 lat (usec): min=188, max=513, avg=234.88, stdev=37.91 00:15:59.004 clat percentiles (usec): 00:15:59.004 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 200], 
20.00th=[ 204], 00:15:59.004 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:15:59.004 | 70.00th=[ 227], 80.00th=[ 233], 90.00th=[ 253], 95.00th=[ 265], 00:15:59.004 | 99.00th=[ 396], 99.50th=[ 453], 99.90th=[ 486], 99.95th=[ 486], 00:15:59.004 | 99.99th=[ 486] 00:15:59.004 bw ( KiB/s): min= 4096, max= 4096, per=31.52%, avg=4096.00, stdev= 0.00, samples=1 00:15:59.004 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:59.004 lat (usec) : 250=85.77%, 500=10.11% 00:15:59.004 lat (msec) : 50=4.12% 00:15:59.004 cpu : usr=0.68%, sys=0.20%, ctx=534, majf=0, minf=1 00:15:59.004 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.004 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.004 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:59.004 00:15:59.004 Run status group 0 (all jobs): 00:15:59.004 READ: bw=6258KiB/s (6408kB/s), 85.9KiB/s-6138KiB/s (88.0kB/s-6285kB/s), io=6408KiB (6562kB), run=1001-1024msec 00:15:59.004 WRITE: bw=12.7MiB/s (13.3MB/s), 2000KiB/s-7157KiB/s (2048kB/s-7329kB/s), io=13.0MiB (13.6MB), run=1001-1024msec 00:15:59.004 00:15:59.004 Disk stats (read/write): 00:15:59.004 nvme0n1: ios=66/512, merge=0/0, ticks=693/105, in_queue=798, util=82.36% 00:15:59.004 nvme0n2: ios=66/512, merge=0/0, ticks=739/108, in_queue=847, util=86.41% 00:15:59.004 nvme0n3: ios=1188/1536, merge=0/0, ticks=1306/301, in_queue=1607, util=94.89% 00:15:59.004 nvme0n4: ios=73/512, merge=0/0, ticks=744/110, in_queue=854, util=95.35% 00:15:59.004 23:41:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:59.004 [global] 00:15:59.004 thread=1 00:15:59.004 invalidate=1 00:15:59.004 rw=randwrite 
00:15:59.004 time_based=1 00:15:59.004 runtime=1 00:15:59.004 ioengine=libaio 00:15:59.004 direct=1 00:15:59.004 bs=4096 00:15:59.004 iodepth=1 00:15:59.004 norandommap=0 00:15:59.004 numjobs=1 00:15:59.004 00:15:59.004 verify_dump=1 00:15:59.004 verify_backlog=512 00:15:59.004 verify_state_save=0 00:15:59.004 do_verify=1 00:15:59.004 verify=crc32c-intel 00:15:59.004 [job0] 00:15:59.004 filename=/dev/nvme0n1 00:15:59.004 [job1] 00:15:59.004 filename=/dev/nvme0n2 00:15:59.004 [job2] 00:15:59.004 filename=/dev/nvme0n3 00:15:59.004 [job3] 00:15:59.004 filename=/dev/nvme0n4 00:15:59.004 Could not set queue depth (nvme0n1) 00:15:59.004 Could not set queue depth (nvme0n2) 00:15:59.004 Could not set queue depth (nvme0n3) 00:15:59.004 Could not set queue depth (nvme0n4) 00:15:59.262 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:59.262 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:59.262 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:59.262 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:59.262 fio-3.35 00:15:59.262 Starting 4 threads 00:16:00.637 00:16:00.637 job0: (groupid=0, jobs=1): err= 0: pid=992546: Mon Jul 15 23:41:49 2024 00:16:00.637 read: IOPS=898, BW=3594KiB/s (3680kB/s)(3680KiB/1024msec) 00:16:00.637 slat (nsec): min=7646, max=81880, avg=8846.57, stdev=3283.42 00:16:00.637 clat (usec): min=256, max=41319, avg=841.68, stdev=4423.43 00:16:00.637 lat (usec): min=264, max=41329, avg=850.52, stdev=4424.23 00:16:00.637 clat percentiles (usec): 00:16:00.637 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:16:00.637 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 338], 00:16:00.637 | 70.00th=[ 355], 80.00th=[ 396], 90.00th=[ 486], 95.00th=[ 498], 00:16:00.637 | 
99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:00.637 | 99.99th=[41157] 00:16:00.638 write: IOPS=1000, BW=4000KiB/s (4096kB/s)(4096KiB/1024msec); 0 zone resets 00:16:00.638 slat (nsec): min=7738, max=86913, avg=12247.09, stdev=3342.51 00:16:00.638 clat (usec): min=146, max=422, avg=217.15, stdev=27.78 00:16:00.638 lat (usec): min=157, max=434, avg=229.40, stdev=28.66 00:16:00.638 clat percentiles (usec): 00:16:00.638 | 1.00th=[ 163], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:16:00.638 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:16:00.638 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 281], 00:16:00.638 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 367], 99.95th=[ 424], 00:16:00.638 | 99.99th=[ 424] 00:16:00.638 bw ( KiB/s): min= 8192, max= 8192, per=45.60%, avg=8192.00, stdev= 0.00, samples=1 00:16:00.638 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:00.638 lat (usec) : 250=47.94%, 500=49.79%, 750=1.70% 00:16:00.638 lat (msec) : 50=0.57% 00:16:00.638 cpu : usr=1.47%, sys=3.23%, ctx=1946, majf=0, minf=2 00:16:00.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.638 issued rwts: total=920,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.638 job1: (groupid=0, jobs=1): err= 0: pid=992547: Mon Jul 15 23:41:49 2024 00:16:00.638 read: IOPS=21, BW=85.8KiB/s (87.8kB/s)(88.0KiB/1026msec) 00:16:00.638 slat (nsec): min=11920, max=29688, avg=18294.23, stdev=5159.57 00:16:00.638 clat (usec): min=40759, max=41114, avg=40970.21, stdev=73.33 00:16:00.638 lat (usec): min=40771, max=41135, avg=40988.51, stdev=74.32 00:16:00.638 clat percentiles (usec): 00:16:00.638 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:16:00.638 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:00.638 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:00.638 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:00.638 | 99.99th=[41157] 00:16:00.638 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:16:00.638 slat (nsec): min=10333, max=51089, avg=12769.05, stdev=3088.78 00:16:00.638 clat (usec): min=188, max=1137, avg=226.30, stdev=46.83 00:16:00.638 lat (usec): min=201, max=1148, avg=239.07, stdev=47.47 00:16:00.638 clat percentiles (usec): 00:16:00.638 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:16:00.638 | 30.00th=[ 215], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 225], 00:16:00.638 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 258], 00:16:00.638 | 99.00th=[ 293], 99.50th=[ 330], 99.90th=[ 1139], 99.95th=[ 1139], 00:16:00.638 | 99.99th=[ 1139] 00:16:00.638 bw ( KiB/s): min= 4096, max= 4096, per=22.80%, avg=4096.00, stdev= 0.00, samples=1 00:16:00.638 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:00.638 lat (usec) : 250=88.39%, 500=7.12%, 750=0.19% 00:16:00.638 lat (msec) : 2=0.19%, 50=4.12% 00:16:00.638 cpu : usr=0.68%, sys=0.68%, ctx=535, majf=0, minf=1 00:16:00.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.638 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.638 job2: (groupid=0, jobs=1): err= 0: pid=992548: Mon Jul 15 23:41:49 2024 00:16:00.638 read: IOPS=511, BW=2045KiB/s (2094kB/s)(2084KiB/1019msec) 00:16:00.638 slat (nsec): min=7378, max=25155, avg=8669.54, stdev=2406.37 00:16:00.638 clat (usec): min=289, 
max=41360, avg=1483.03, stdev=6574.21 00:16:00.638 lat (usec): min=297, max=41371, avg=1491.69, stdev=6576.18 00:16:00.638 clat percentiles (usec): 00:16:00.638 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 330], 00:16:00.638 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 392], 00:16:00.638 | 70.00th=[ 461], 80.00th=[ 482], 90.00th=[ 494], 95.00th=[ 510], 00:16:00.638 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:00.638 | 99.99th=[41157] 00:16:00.638 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:16:00.638 slat (nsec): min=8927, max=38169, avg=11604.37, stdev=2747.22 00:16:00.638 clat (usec): min=166, max=3433, avg=220.44, stdev=103.63 00:16:00.638 lat (usec): min=177, max=3444, avg=232.05, stdev=103.80 00:16:00.638 clat percentiles (usec): 00:16:00.638 | 1.00th=[ 180], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:16:00.638 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:16:00.638 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 245], 95.00th=[ 262], 00:16:00.638 | 99.00th=[ 310], 99.50th=[ 347], 99.90th=[ 420], 99.95th=[ 3425], 00:16:00.638 | 99.99th=[ 3425] 00:16:00.638 bw ( KiB/s): min= 4096, max= 4096, per=22.80%, avg=4096.00, stdev= 0.00, samples=2 00:16:00.638 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:16:00.638 lat (usec) : 250=61.04%, 500=36.38%, 750=1.62% 00:16:00.638 lat (msec) : 4=0.06%, 50=0.91% 00:16:00.638 cpu : usr=0.88%, sys=1.77%, ctx=1545, majf=0, minf=1 00:16:00.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.638 issued rwts: total=521,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.638 job3: (groupid=0, jobs=1): err= 0: pid=992549: Mon Jul 
15 23:41:49 2024 00:16:00.638 read: IOPS=2040, BW=8164KiB/s (8360kB/s)(8172KiB/1001msec) 00:16:00.638 slat (nsec): min=7361, max=23291, avg=8507.48, stdev=1233.58 00:16:00.638 clat (usec): min=202, max=664, avg=269.53, stdev=36.64 00:16:00.638 lat (usec): min=210, max=672, avg=278.04, stdev=36.80 00:16:00.638 clat percentiles (usec): 00:16:00.638 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 247], 00:16:00.638 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:16:00.638 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 318], 00:16:00.638 | 99.00th=[ 445], 99.50th=[ 461], 99.90th=[ 494], 99.95th=[ 594], 00:16:00.638 | 99.99th=[ 668] 00:16:00.638 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:00.638 slat (nsec): min=10693, max=38144, avg=12362.22, stdev=1667.93 00:16:00.638 clat (usec): min=143, max=1349, avg=192.41, stdev=35.02 00:16:00.638 lat (usec): min=154, max=1361, avg=204.77, stdev=35.51 00:16:00.638 clat percentiles (usec): 00:16:00.638 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 172], 00:16:00.638 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:16:00.638 | 70.00th=[ 204], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 233], 00:16:00.638 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 293], 99.95th=[ 429], 00:16:00.638 | 99.99th=[ 1352] 00:16:00.638 bw ( KiB/s): min= 8192, max= 8192, per=45.60%, avg=8192.00, stdev= 0.00, samples=1 00:16:00.638 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:00.638 lat (usec) : 250=62.11%, 500=37.81%, 750=0.05% 00:16:00.638 lat (msec) : 2=0.02% 00:16:00.638 cpu : usr=3.50%, sys=6.60%, ctx=4092, majf=0, minf=1 00:16:00.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:00.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.638 issued rwts: 
total=2043,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:00.638 00:16:00.638 Run status group 0 (all jobs): 00:16:00.638 READ: bw=13.3MiB/s (14.0MB/s), 85.8KiB/s-8164KiB/s (87.8kB/s-8360kB/s), io=13.7MiB (14.4MB), run=1001-1026msec 00:16:00.638 WRITE: bw=17.5MiB/s (18.4MB/s), 1996KiB/s-8184KiB/s (2044kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1026msec 00:16:00.638 00:16:00.638 Disk stats (read/write): 00:16:00.638 nvme0n1: ios=971/1024, merge=0/0, ticks=794/199, in_queue=993, util=87.47% 00:16:00.638 nvme0n2: ios=57/512, merge=0/0, ticks=1340/107, in_queue=1447, util=97.06% 00:16:00.638 nvme0n3: ios=574/1024, merge=0/0, ticks=658/223, in_queue=881, util=90.97% 00:16:00.638 nvme0n4: ios=1560/1990, merge=0/0, ticks=1350/357, in_queue=1707, util=98.74% 00:16:00.638 23:41:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:00.638 [global] 00:16:00.638 thread=1 00:16:00.638 invalidate=1 00:16:00.638 rw=write 00:16:00.638 time_based=1 00:16:00.638 runtime=1 00:16:00.638 ioengine=libaio 00:16:00.638 direct=1 00:16:00.638 bs=4096 00:16:00.638 iodepth=128 00:16:00.638 norandommap=0 00:16:00.638 numjobs=1 00:16:00.638 00:16:00.638 verify_dump=1 00:16:00.638 verify_backlog=512 00:16:00.638 verify_state_save=0 00:16:00.638 do_verify=1 00:16:00.638 verify=crc32c-intel 00:16:00.638 [job0] 00:16:00.638 filename=/dev/nvme0n1 00:16:00.638 [job1] 00:16:00.638 filename=/dev/nvme0n2 00:16:00.638 [job2] 00:16:00.638 filename=/dev/nvme0n3 00:16:00.638 [job3] 00:16:00.638 filename=/dev/nvme0n4 00:16:00.638 Could not set queue depth (nvme0n1) 00:16:00.638 Could not set queue depth (nvme0n2) 00:16:00.638 Could not set queue depth (nvme0n3) 00:16:00.638 Could not set queue depth (nvme0n4) 00:16:00.896 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:16:00.896 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.896 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.896 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:00.896 fio-3.35 00:16:00.896 Starting 4 threads 00:16:02.261 00:16:02.261 job0: (groupid=0, jobs=1): err= 0: pid=992917: Mon Jul 15 23:41:50 2024 00:16:02.261 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:16:02.261 slat (nsec): min=1028, max=28698k, avg=108549.28, stdev=859859.38 00:16:02.261 clat (usec): min=1010, max=141916, avg=14477.34, stdev=14713.47 00:16:02.261 lat (usec): min=1013, max=141924, avg=14585.89, stdev=14830.22 00:16:02.261 clat percentiles (usec): 00:16:02.261 | 1.00th=[ 1450], 5.00th=[ 5866], 10.00th=[ 8094], 20.00th=[ 9503], 00:16:02.261 | 30.00th=[ 10290], 40.00th=[ 10945], 50.00th=[ 11731], 60.00th=[ 12649], 00:16:02.261 | 70.00th=[ 13435], 80.00th=[ 14615], 90.00th=[ 17433], 95.00th=[ 30016], 00:16:02.261 | 99.00th=[106431], 99.50th=[124257], 99.90th=[141558], 99.95th=[141558], 00:16:02.261 | 99.99th=[141558] 00:16:02.261 write: IOPS=4188, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1003msec); 0 zone resets 00:16:02.261 slat (nsec): min=1823, max=13008k, avg=113147.94, stdev=705962.29 00:16:02.261 clat (usec): min=441, max=141919, avg=16118.70, stdev=17844.89 00:16:02.261 lat (msec): min=3, max=141, avg=16.23, stdev=17.94 00:16:02.261 clat percentiles (msec): 00:16:02.261 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:16:02.261 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:16:02.261 | 70.00th=[ 15], 80.00th=[ 17], 90.00th=[ 24], 95.00th=[ 35], 00:16:02.261 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 125], 99.95th=[ 125], 00:16:02.261 | 99.99th=[ 142] 00:16:02.262 bw ( KiB/s): min=13736, max=19056, per=25.39%, avg=16396.00, stdev=3761.81, 
samples=2 00:16:02.262 iops : min= 3434, max= 4764, avg=4099.00, stdev=940.45, samples=2 00:16:02.262 lat (usec) : 500=0.01% 00:16:02.262 lat (msec) : 2=0.72%, 4=1.16%, 10=27.67%, 20=59.11%, 50=8.03% 00:16:02.262 lat (msec) : 100=1.59%, 250=1.71% 00:16:02.262 cpu : usr=2.30%, sys=4.59%, ctx=359, majf=0, minf=1 00:16:02.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:02.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.262 issued rwts: total=4096,4201,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.262 job1: (groupid=0, jobs=1): err= 0: pid=992923: Mon Jul 15 23:41:50 2024 00:16:02.262 read: IOPS=3889, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1004msec) 00:16:02.262 slat (nsec): min=1004, max=17435k, avg=112342.63, stdev=733703.75 00:16:02.262 clat (usec): min=677, max=88495, avg=14217.97, stdev=9831.68 00:16:02.262 lat (usec): min=2631, max=90001, avg=14330.31, stdev=9893.02 00:16:02.262 clat percentiles (usec): 00:16:02.262 | 1.00th=[ 3261], 5.00th=[ 4817], 10.00th=[ 6783], 20.00th=[ 8291], 00:16:02.262 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[10945], 60.00th=[11600], 00:16:02.262 | 70.00th=[14353], 80.00th=[20841], 90.00th=[25297], 95.00th=[30802], 00:16:02.262 | 99.00th=[53740], 99.50th=[68682], 99.90th=[82314], 99.95th=[82314], 00:16:02.262 | 99.99th=[88605] 00:16:02.262 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:16:02.262 slat (nsec): min=1832, max=19184k, avg=131412.56, stdev=798301.73 00:16:02.262 clat (usec): min=851, max=119354, avg=17500.10, stdev=21099.28 00:16:02.262 lat (usec): min=858, max=119367, avg=17631.52, stdev=21236.06 00:16:02.262 clat percentiles (msec): 00:16:02.262 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 7], 00:16:02.262 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 12], 
00:16:02.262 | 70.00th=[ 14], 80.00th=[ 23], 90.00th=[ 39], 95.00th=[ 59], 00:16:02.262 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 112], 99.95th=[ 114], 00:16:02.262 | 99.99th=[ 120] 00:16:02.262 bw ( KiB/s): min=12992, max=19776, per=25.38%, avg=16384.00, stdev=4797.01, samples=2 00:16:02.262 iops : min= 3248, max= 4944, avg=4096.00, stdev=1199.25, samples=2 00:16:02.262 lat (usec) : 750=0.01%, 1000=0.04% 00:16:02.262 lat (msec) : 2=0.16%, 4=3.54%, 10=42.44%, 20=31.00%, 50=18.60% 00:16:02.262 lat (msec) : 100=3.04%, 250=1.17% 00:16:02.262 cpu : usr=1.89%, sys=3.09%, ctx=580, majf=0, minf=1 00:16:02.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:02.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.262 issued rwts: total=3905,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.262 job2: (groupid=0, jobs=1): err= 0: pid=992925: Mon Jul 15 23:41:50 2024 00:16:02.262 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:16:02.262 slat (nsec): min=1534, max=13366k, avg=102968.89, stdev=786264.51 00:16:02.262 clat (usec): min=6170, max=31215, avg=13986.26, stdev=3845.00 00:16:02.262 lat (usec): min=6185, max=31219, avg=14089.22, stdev=3908.83 00:16:02.262 clat percentiles (usec): 00:16:02.262 | 1.00th=[ 6587], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[11076], 00:16:02.262 | 30.00th=[11863], 40.00th=[12387], 50.00th=[13304], 60.00th=[14091], 00:16:02.262 | 70.00th=[15533], 80.00th=[16909], 90.00th=[18744], 95.00th=[21365], 00:16:02.262 | 99.00th=[27395], 99.50th=[28181], 99.90th=[31327], 99.95th=[31327], 00:16:02.262 | 99.99th=[31327] 00:16:02.262 write: IOPS=4366, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1009msec); 0 zone resets 00:16:02.262 slat (nsec): min=1997, max=16721k, avg=112096.43, stdev=774995.47 00:16:02.262 clat (usec): min=1166, 
max=106759, avg=16062.39, stdev=16420.84 00:16:02.262 lat (usec): min=1179, max=106772, avg=16174.49, stdev=16524.63 00:16:02.262 clat percentiles (msec): 00:16:02.262 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:16:02.262 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:16:02.262 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 27], 95.00th=[ 47], 00:16:02.262 | 99.00th=[ 103], 99.50th=[ 105], 99.90th=[ 107], 99.95th=[ 107], 00:16:02.262 | 99.99th=[ 107] 00:16:02.262 bw ( KiB/s): min=16384, max=17848, per=26.51%, avg=17116.00, stdev=1035.20, samples=2 00:16:02.262 iops : min= 4096, max= 4462, avg=4279.00, stdev=258.80, samples=2 00:16:02.262 lat (msec) : 2=0.12%, 4=0.60%, 10=23.62%, 20=63.77%, 50=9.68% 00:16:02.262 lat (msec) : 100=1.60%, 250=0.61% 00:16:02.262 cpu : usr=3.77%, sys=5.75%, ctx=304, majf=0, minf=1 00:16:02.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:02.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.262 issued rwts: total=4096,4406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.262 job3: (groupid=0, jobs=1): err= 0: pid=992926: Mon Jul 15 23:41:50 2024 00:16:02.262 read: IOPS=3449, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1005msec) 00:16:02.262 slat (nsec): min=1073, max=25899k, avg=153247.37, stdev=1120643.55 00:16:02.262 clat (usec): min=2152, max=73586, avg=19674.13, stdev=14732.26 00:16:02.262 lat (usec): min=2155, max=73588, avg=19827.38, stdev=14769.97 00:16:02.262 clat percentiles (usec): 00:16:02.262 | 1.00th=[ 4146], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 8848], 00:16:02.262 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12780], 60.00th=[17957], 00:16:02.262 | 70.00th=[23200], 80.00th=[30540], 90.00th=[37487], 95.00th=[45876], 00:16:02.262 | 99.00th=[71828], 99.50th=[71828], 99.90th=[73925], 
99.95th=[73925], 00:16:02.262 | 99.99th=[73925] 00:16:02.262 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:16:02.262 slat (nsec): min=1809, max=12105k, avg=126771.11, stdev=694728.53 00:16:02.262 clat (usec): min=2024, max=85356, avg=16450.76, stdev=17860.20 00:16:02.262 lat (usec): min=2032, max=85368, avg=16577.53, stdev=17973.45 00:16:02.262 clat percentiles (usec): 00:16:02.262 | 1.00th=[ 3195], 5.00th=[ 4621], 10.00th=[ 4883], 20.00th=[ 5735], 00:16:02.262 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[11600], 00:16:02.262 | 70.00th=[13304], 80.00th=[15795], 90.00th=[45876], 95.00th=[64226], 00:16:02.262 | 99.00th=[80217], 99.50th=[81265], 99.90th=[85459], 99.95th=[85459], 00:16:02.262 | 99.99th=[85459] 00:16:02.262 bw ( KiB/s): min=13336, max=15336, per=22.20%, avg=14336.00, stdev=1414.21, samples=2 00:16:02.262 iops : min= 3334, max= 3834, avg=3584.00, stdev=353.55, samples=2 00:16:02.262 lat (msec) : 4=1.39%, 10=37.64%, 20=31.73%, 50=22.48%, 100=6.76% 00:16:02.262 cpu : usr=1.49%, sys=3.78%, ctx=491, majf=0, minf=1 00:16:02.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:02.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.262 issued rwts: total=3467,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.262 00:16:02.262 Run status group 0 (all jobs): 00:16:02.262 READ: bw=60.3MiB/s (63.2MB/s), 13.5MiB/s-16.0MiB/s (14.1MB/s-16.7MB/s), io=60.8MiB (63.8MB), run=1003-1009msec 00:16:02.262 WRITE: bw=63.1MiB/s (66.1MB/s), 13.9MiB/s-17.1MiB/s (14.6MB/s-17.9MB/s), io=63.6MiB (66.7MB), run=1003-1009msec 00:16:02.262 00:16:02.262 Disk stats (read/write): 00:16:02.262 nvme0n1: ios=3112/3312, merge=0/0, ticks=37178/44537, in_queue=81715, util=96.99% 00:16:02.262 nvme0n2: ios=3606/3678, merge=0/0, 
ticks=28717/30122, in_queue=58839, util=95.53% 00:16:02.262 nvme0n3: ios=3534/3584, merge=0/0, ticks=49511/56385, in_queue=105896, util=96.36% 00:16:02.262 nvme0n4: ios=2896/3072, merge=0/0, ticks=31559/25479, in_queue=57038, util=98.22% 00:16:02.262 23:41:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:02.262 [global] 00:16:02.262 thread=1 00:16:02.262 invalidate=1 00:16:02.262 rw=randwrite 00:16:02.262 time_based=1 00:16:02.262 runtime=1 00:16:02.262 ioengine=libaio 00:16:02.262 direct=1 00:16:02.262 bs=4096 00:16:02.262 iodepth=128 00:16:02.262 norandommap=0 00:16:02.262 numjobs=1 00:16:02.262 00:16:02.262 verify_dump=1 00:16:02.262 verify_backlog=512 00:16:02.262 verify_state_save=0 00:16:02.262 do_verify=1 00:16:02.262 verify=crc32c-intel 00:16:02.262 [job0] 00:16:02.262 filename=/dev/nvme0n1 00:16:02.262 [job1] 00:16:02.262 filename=/dev/nvme0n2 00:16:02.262 [job2] 00:16:02.262 filename=/dev/nvme0n3 00:16:02.262 [job3] 00:16:02.262 filename=/dev/nvme0n4 00:16:02.262 Could not set queue depth (nvme0n1) 00:16:02.262 Could not set queue depth (nvme0n2) 00:16:02.262 Could not set queue depth (nvme0n3) 00:16:02.262 Could not set queue depth (nvme0n4) 00:16:02.262 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:02.262 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:02.262 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:02.262 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:02.262 fio-3.35 00:16:02.262 Starting 4 threads 00:16:03.629 00:16:03.630 job0: (groupid=0, jobs=1): err= 0: pid=993295: Mon Jul 15 23:41:52 2024 00:16:03.630 read: IOPS=2075, BW=8301KiB/s 
(8500kB/s)(8384KiB/1010msec) 00:16:03.630 slat (nsec): min=1537, max=25096k, avg=170284.67, stdev=1408924.95 00:16:03.630 clat (usec): min=3257, max=55893, avg=20856.31, stdev=11227.27 00:16:03.630 lat (usec): min=6415, max=55908, avg=21026.60, stdev=11335.90 00:16:03.630 clat percentiles (usec): 00:16:03.630 | 1.00th=[ 6456], 5.00th=[10945], 10.00th=[11076], 20.00th=[11469], 00:16:03.630 | 30.00th=[11863], 40.00th=[13566], 50.00th=[16909], 60.00th=[24249], 00:16:03.630 | 70.00th=[25035], 80.00th=[27657], 90.00th=[37487], 95.00th=[45876], 00:16:03.630 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55837], 99.95th=[55837], 00:16:03.630 | 99.99th=[55837] 00:16:03.630 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:16:03.630 slat (usec): min=2, max=38004, avg=247.72, stdev=1459.11 00:16:03.630 clat (msec): min=3, max=124, avg=32.95, stdev=33.20 00:16:03.630 lat (msec): min=3, max=124, avg=33.20, stdev=33.43 00:16:03.630 clat percentiles (msec): 00:16:03.630 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 12], 00:16:03.630 | 30.00th=[ 12], 40.00th=[ 16], 50.00th=[ 20], 60.00th=[ 22], 00:16:03.630 | 70.00th=[ 24], 80.00th=[ 56], 90.00th=[ 100], 95.00th=[ 110], 00:16:03.630 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 125], 99.95th=[ 125], 00:16:03.630 | 99.99th=[ 125] 00:16:03.630 bw ( KiB/s): min= 9384, max=10456, per=13.64%, avg=9920.00, stdev=758.02, samples=2 00:16:03.630 iops : min= 2346, max= 2614, avg=2480.00, stdev=189.50, samples=2 00:16:03.630 lat (msec) : 4=0.28%, 10=10.16%, 20=41.60%, 50=34.79%, 100=8.44% 00:16:03.630 lat (msec) : 250=4.73% 00:16:03.630 cpu : usr=2.87%, sys=2.08%, ctx=245, majf=0, minf=1 00:16:03.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:16:03.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.630 issued rwts: total=2096,2560,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:03.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.630 job1: (groupid=0, jobs=1): err= 0: pid=993296: Mon Jul 15 23:41:52 2024 00:16:03.630 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:16:03.630 slat (nsec): min=1512, max=4919.1k, avg=79498.17, stdev=462681.35 00:16:03.630 clat (usec): min=4463, max=15204, avg=10228.50, stdev=1430.74 00:16:03.630 lat (usec): min=4466, max=15937, avg=10308.00, stdev=1460.37 00:16:03.630 clat percentiles (usec): 00:16:03.630 | 1.00th=[ 7177], 5.00th=[ 7898], 10.00th=[ 8356], 20.00th=[ 9110], 00:16:03.630 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:16:03.630 | 70.00th=[10683], 80.00th=[11207], 90.00th=[12125], 95.00th=[12780], 00:16:03.630 | 99.00th=[14091], 99.50th=[14484], 99.90th=[15008], 99.95th=[15008], 00:16:03.630 | 99.99th=[15139] 00:16:03.630 write: IOPS=6164, BW=24.1MiB/s (25.2MB/s)(24.2MiB/1003msec); 0 zone resets 00:16:03.630 slat (usec): min=2, max=8224, avg=77.51, stdev=464.64 00:16:03.630 clat (usec): min=2532, max=21089, avg=10335.96, stdev=1805.21 00:16:03.630 lat (usec): min=2542, max=22245, avg=10413.47, stdev=1836.00 00:16:03.630 clat percentiles (usec): 00:16:03.630 | 1.00th=[ 6063], 5.00th=[ 7635], 10.00th=[ 8717], 20.00th=[ 9634], 00:16:03.630 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:16:03.630 | 70.00th=[10290], 80.00th=[10814], 90.00th=[12780], 95.00th=[14222], 00:16:03.630 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16319], 99.95th=[18220], 00:16:03.630 | 99.99th=[21103] 00:16:03.630 bw ( KiB/s): min=24576, max=24576, per=33.78%, avg=24576.00, stdev= 0.00, samples=2 00:16:03.630 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:16:03.630 lat (msec) : 4=0.21%, 10=39.03%, 20=60.75%, 50=0.01% 00:16:03.630 cpu : usr=4.29%, sys=6.39%, ctx=606, majf=0, minf=1 00:16:03.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:03.630 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.630 issued rwts: total=6144,6183,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.630 job2: (groupid=0, jobs=1): err= 0: pid=993297: Mon Jul 15 23:41:52 2024 00:16:03.630 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:16:03.630 slat (nsec): min=1586, max=18172k, avg=123336.78, stdev=938381.09 00:16:03.630 clat (usec): min=5656, max=55685, avg=15534.66, stdev=5387.99 00:16:03.630 lat (usec): min=5663, max=55690, avg=15657.99, stdev=5479.69 00:16:03.630 clat percentiles (usec): 00:16:03.630 | 1.00th=[ 8979], 5.00th=[11863], 10.00th=[12125], 20.00th=[12256], 00:16:03.630 | 30.00th=[12518], 40.00th=[12649], 50.00th=[13173], 60.00th=[13829], 00:16:03.630 | 70.00th=[15795], 80.00th=[19006], 90.00th=[22414], 95.00th=[25035], 00:16:03.630 | 99.00th=[37487], 99.50th=[44827], 99.90th=[55837], 99.95th=[55837], 00:16:03.630 | 99.99th=[55837] 00:16:03.630 write: IOPS=4025, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1015msec); 0 zone resets 00:16:03.630 slat (usec): min=2, max=16517, avg=129.05, stdev=825.52 00:16:03.630 clat (usec): min=1229, max=91097, avg=17832.43, stdev=13377.34 00:16:03.630 lat (usec): min=1242, max=91104, avg=17961.47, stdev=13453.25 00:16:03.630 clat percentiles (usec): 00:16:03.630 | 1.00th=[ 4146], 5.00th=[ 7767], 10.00th=[ 7898], 20.00th=[ 9634], 00:16:03.630 | 30.00th=[11338], 40.00th=[12780], 50.00th=[14615], 60.00th=[16057], 00:16:03.630 | 70.00th=[18744], 80.00th=[21365], 90.00th=[23462], 95.00th=[53740], 00:16:03.630 | 99.00th=[77071], 99.50th=[81265], 99.90th=[90702], 99.95th=[90702], 00:16:03.630 | 99.99th=[90702] 00:16:03.630 bw ( KiB/s): min=15280, max=16384, per=21.76%, avg=15832.00, stdev=780.65, samples=2 00:16:03.630 iops : min= 3820, max= 4096, avg=3958.00, stdev=195.16, samples=2 00:16:03.630 lat (msec) : 2=0.04%, 
4=0.23%, 10=11.66%, 20=66.73%, 50=18.04% 00:16:03.630 lat (msec) : 100=3.30% 00:16:03.630 cpu : usr=3.35%, sys=4.93%, ctx=313, majf=0, minf=1 00:16:03.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:03.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.630 issued rwts: total=3584,4086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.630 job3: (groupid=0, jobs=1): err= 0: pid=993298: Mon Jul 15 23:41:52 2024 00:16:03.630 read: IOPS=5565, BW=21.7MiB/s (22.8MB/s)(21.9MiB/1009msec) 00:16:03.630 slat (nsec): min=1417, max=10131k, avg=99150.83, stdev=708167.22 00:16:03.630 clat (usec): min=3392, max=21492, avg=12207.94, stdev=2713.93 00:16:03.630 lat (usec): min=3838, max=26692, avg=12307.09, stdev=2766.91 00:16:03.630 clat percentiles (usec): 00:16:03.630 | 1.00th=[ 5735], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[10814], 00:16:03.630 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:16:03.630 | 70.00th=[11994], 80.00th=[13960], 90.00th=[16450], 95.00th=[18220], 00:16:03.630 | 99.00th=[20317], 99.50th=[20579], 99.90th=[21103], 99.95th=[21103], 00:16:03.630 | 99.99th=[21365] 00:16:03.630 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:16:03.630 slat (usec): min=2, max=17547, avg=74.12, stdev=442.47 00:16:03.630 clat (usec): min=1688, max=23953, avg=10538.35, stdev=3177.47 00:16:03.630 lat (usec): min=1701, max=23965, avg=10612.47, stdev=3190.27 00:16:03.630 clat percentiles (usec): 00:16:03.630 | 1.00th=[ 3294], 5.00th=[ 4817], 10.00th=[ 6587], 20.00th=[ 8160], 00:16:03.630 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11338], 60.00th=[11469], 00:16:03.630 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11994], 95.00th=[14746], 00:16:03.630 | 99.00th=[23462], 99.50th=[23987], 99.90th=[23987], 
99.95th=[23987], 00:16:03.630 | 99.99th=[23987] 00:16:03.630 bw ( KiB/s): min=22000, max=23056, per=30.97%, avg=22528.00, stdev=746.70, samples=2 00:16:03.630 iops : min= 5500, max= 5764, avg=5632.00, stdev=186.68, samples=2 00:16:03.630 lat (msec) : 2=0.10%, 4=1.37%, 10=17.67%, 20=79.16%, 50=1.70% 00:16:03.630 cpu : usr=4.17%, sys=5.75%, ctx=657, majf=0, minf=1 00:16:03.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:03.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.630 issued rwts: total=5616,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.630 00:16:03.630 Run status group 0 (all jobs): 00:16:03.630 READ: bw=67.1MiB/s (70.4MB/s), 8301KiB/s-23.9MiB/s (8500kB/s-25.1MB/s), io=68.1MiB (71.4MB), run=1003-1015msec 00:16:03.630 WRITE: bw=71.0MiB/s (74.5MB/s), 9.90MiB/s-24.1MiB/s (10.4MB/s-25.2MB/s), io=72.1MiB (75.6MB), run=1003-1015msec 00:16:03.630 00:16:03.630 Disk stats (read/write): 00:16:03.630 nvme0n1: ios=1563/1783, merge=0/0, ticks=35970/61624, in_queue=97594, util=86.26% 00:16:03.630 nvme0n2: ios=5169/5342, merge=0/0, ticks=26526/25975, in_queue=52501, util=89.95% 00:16:03.630 nvme0n3: ios=3381/3584, merge=0/0, ticks=51201/51571, in_queue=102772, util=94.60% 00:16:03.630 nvme0n4: ios=4630/4919, merge=0/0, ticks=55578/48949, in_queue=104527, util=94.35% 00:16:03.630 23:41:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:03.630 23:41:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=993529 00:16:03.630 23:41:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:03.630 23:41:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:03.630 [global] 00:16:03.630 thread=1 00:16:03.630 
invalidate=1 00:16:03.630 rw=read 00:16:03.630 time_based=1 00:16:03.630 runtime=10 00:16:03.630 ioengine=libaio 00:16:03.630 direct=1 00:16:03.630 bs=4096 00:16:03.630 iodepth=1 00:16:03.630 norandommap=1 00:16:03.630 numjobs=1 00:16:03.630 00:16:03.630 [job0] 00:16:03.630 filename=/dev/nvme0n1 00:16:03.630 [job1] 00:16:03.630 filename=/dev/nvme0n2 00:16:03.630 [job2] 00:16:03.630 filename=/dev/nvme0n3 00:16:03.630 [job3] 00:16:03.630 filename=/dev/nvme0n4 00:16:03.630 Could not set queue depth (nvme0n1) 00:16:03.630 Could not set queue depth (nvme0n2) 00:16:03.630 Could not set queue depth (nvme0n3) 00:16:03.630 Could not set queue depth (nvme0n4) 00:16:03.888 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.888 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.888 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.888 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:03.888 fio-3.35 00:16:03.888 Starting 4 threads 00:16:07.163 23:41:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:07.163 23:41:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:07.163 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=4997120, buflen=4096 00:16:07.163 fio: pid=993675, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:07.163 23:41:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:07.163 23:41:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc0 00:16:07.163 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=290816, buflen=4096 00:16:07.163 fio: pid=993674, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:07.163 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2871296, buflen=4096 00:16:07.163 fio: pid=993671, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:07.163 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:07.163 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:07.421 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=39854080, buflen=4096 00:16:07.421 fio: pid=993672, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:07.421 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:07.421 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:07.421 00:16:07.421 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=993671: Mon Jul 15 23:41:56 2024 00:16:07.421 read: IOPS=226, BW=904KiB/s (926kB/s)(2804KiB/3102msec) 00:16:07.421 slat (usec): min=5, max=29780, avg=50.20, stdev=1123.69 00:16:07.421 clat (usec): min=345, max=44966, avg=4343.40, stdev=12075.04 00:16:07.421 lat (usec): min=352, max=71035, avg=4393.66, stdev=12258.64 00:16:07.421 clat percentiles (usec): 00:16:07.421 | 1.00th=[ 363], 5.00th=[ 371], 10.00th=[ 375], 20.00th=[ 379], 00:16:07.421 | 30.00th=[ 383], 40.00th=[ 388], 50.00th=[ 388], 60.00th=[ 392], 00:16:07.421 | 70.00th=[ 396], 80.00th=[ 400], 90.00th=[ 578], 95.00th=[41157], 00:16:07.421 | 99.00th=[41681], 99.50th=[41681], 
99.90th=[44827], 99.95th=[44827], 00:16:07.421 | 99.99th=[44827] 00:16:07.421 bw ( KiB/s): min= 94, max= 5104, per=6.53%, avg=931.67, stdev=2044.02, samples=6 00:16:07.421 iops : min= 23, max= 1276, avg=232.83, stdev=511.05, samples=6 00:16:07.421 lat (usec) : 500=89.60%, 750=0.57% 00:16:07.421 lat (msec) : 50=9.69% 00:16:07.421 cpu : usr=0.06%, sys=0.19%, ctx=704, majf=0, minf=1 00:16:07.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.421 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.421 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.421 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=993672: Mon Jul 15 23:41:56 2024 00:16:07.421 read: IOPS=2959, BW=11.6MiB/s (12.1MB/s)(38.0MiB/3288msec) 00:16:07.421 slat (usec): min=5, max=15680, avg=14.31, stdev=294.78 00:16:07.421 clat (usec): min=230, max=20724, avg=321.29, stdev=214.70 00:16:07.421 lat (usec): min=238, max=20731, avg=335.60, stdev=367.30 00:16:07.421 clat percentiles (usec): 00:16:07.421 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:16:07.421 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:16:07.421 | 70.00th=[ 306], 80.00th=[ 367], 90.00th=[ 424], 95.00th=[ 449], 00:16:07.421 | 99.00th=[ 482], 99.50th=[ 506], 99.90th=[ 644], 99.95th=[ 668], 00:16:07.421 | 99.99th=[20841] 00:16:07.421 bw ( KiB/s): min=10728, max=12664, per=83.71%, avg=11937.83, stdev=663.88, samples=6 00:16:07.421 iops : min= 2682, max= 3166, avg=2984.33, stdev=165.94, samples=6 00:16:07.421 lat (usec) : 250=0.36%, 500=98.95%, 750=0.66%, 1000=0.01% 00:16:07.421 lat (msec) : 50=0.01% 00:16:07.421 cpu : usr=0.91%, sys=2.56%, ctx=9739, majf=0, minf=1 00:16:07.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.421 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.421 issued rwts: total=9731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.421 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=993674: Mon Jul 15 23:41:56 2024 00:16:07.421 read: IOPS=24, BW=97.4KiB/s (99.8kB/s)(284KiB/2915msec) 00:16:07.421 slat (usec): min=8, max=9753, avg=157.84, stdev=1146.82 00:16:07.421 clat (usec): min=687, max=42059, avg=40582.04, stdev=4817.35 00:16:07.421 lat (usec): min=716, max=50869, avg=40741.79, stdev=4968.08 00:16:07.421 clat percentiles (usec): 00:16:07.421 | 1.00th=[ 685], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:07.421 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:07.421 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:16:07.421 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:07.421 | 99.99th=[42206] 00:16:07.421 bw ( KiB/s): min= 96, max= 104, per=0.68%, avg=97.60, stdev= 3.58, samples=5 00:16:07.421 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:16:07.421 lat (usec) : 750=1.39% 00:16:07.421 lat (msec) : 50=97.22% 00:16:07.421 cpu : usr=0.10%, sys=0.00%, ctx=77, majf=0, minf=1 00:16:07.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.421 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.421 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.421 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=993675: 
Mon Jul 15 23:41:56 2024 00:16:07.421 read: IOPS=446, BW=1783KiB/s (1826kB/s)(4880KiB/2737msec) 00:16:07.421 slat (nsec): min=5989, max=33153, avg=8510.96, stdev=3040.39 00:16:07.421 clat (usec): min=289, max=41067, avg=2215.66, stdev=8500.28 00:16:07.421 lat (usec): min=297, max=41081, avg=2224.17, stdev=8502.56 00:16:07.421 clat percentiles (usec): 00:16:07.421 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 318], 00:16:07.421 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 351], 00:16:07.421 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 416], 95.00th=[ 486], 00:16:07.421 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:07.421 | 99.99th=[41157] 00:16:07.421 bw ( KiB/s): min= 96, max= 9304, per=13.62%, avg=1942.40, stdev=4115.26, samples=5 00:16:07.421 iops : min= 24, max= 2326, avg=485.60, stdev=1028.82, samples=5 00:16:07.421 lat (usec) : 500=95.09%, 750=0.25% 00:16:07.421 lat (msec) : 50=4.59% 00:16:07.421 cpu : usr=0.29%, sys=0.58%, ctx=1221, majf=0, minf=2 00:16:07.421 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:07.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.421 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.421 issued rwts: total=1221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.421 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:07.421 00:16:07.421 Run status group 0 (all jobs): 00:16:07.421 READ: bw=13.9MiB/s (14.6MB/s), 97.4KiB/s-11.6MiB/s (99.8kB/s-12.1MB/s), io=45.8MiB (48.0MB), run=2737-3288msec 00:16:07.421 00:16:07.421 Disk stats (read/write): 00:16:07.421 nvme0n1: ios=702/0, merge=0/0, ticks=3052/0, in_queue=3052, util=94.39% 00:16:07.421 nvme0n2: ios=9272/0, merge=0/0, ticks=3792/0, in_queue=3792, util=98.45% 00:16:07.421 nvme0n3: ios=113/0, merge=0/0, ticks=3930/0, in_queue=3930, util=99.73% 00:16:07.421 nvme0n4: ios=1217/0, merge=0/0, ticks=2566/0, in_queue=2566, util=96.44% 
00:16:07.678 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:07.678 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:07.678 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:07.678 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:07.935 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:07.935 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:08.191 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:08.191 23:41:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:08.191 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:08.191 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 993529 00:16:08.191 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:08.191 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:08.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.449 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:08.449 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1213 -- # local i=0 00:16:08.449 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1214 -- # lsblk -o 
NAME,SERIAL 00:16:08.449 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.449 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:16:08.449 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:08.449 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1225 -- # return 0 00:16:08.449 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:08.449 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:08.449 nvmf hotplug test: fio failed as expected 00:16:08.449 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:08.707 rmmod nvme_tcp 00:16:08.707 rmmod nvme_fabrics 
00:16:08.707 rmmod nvme_keyring 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 990741 ']' 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 990741 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@942 -- # '[' -z 990741 ']' 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # kill -0 990741 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # uname 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 990741 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 990741' 00:16:08.707 killing process with pid 990741 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@961 -- # kill 990741 00:16:08.707 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # wait 990741 00:16:08.966 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:08.966 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:08.966 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:08.966 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:08.966 23:41:57 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:08.966 23:41:57 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:08.966 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:08.966 23:41:57 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:10.923 23:41:59 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:10.923 00:16:10.923 real 0m26.333s 00:16:10.923 user 1m46.914s 00:16:10.923 sys 0m7.595s 00:16:10.923 23:41:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:16:10.923 23:41:59 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.923 ************************************ 00:16:10.923 END TEST nvmf_fio_target 00:16:10.923 ************************************ 00:16:11.181 23:41:59 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:16:11.181 23:41:59 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:11.181 23:41:59 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:16:11.181 23:41:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:16:11.181 23:41:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.181 ************************************ 00:16:11.181 START TEST nvmf_bdevio 00:16:11.181 ************************************ 00:16:11.181 23:41:59 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:11.181 * Looking for test storage... 
00:16:11.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:11.181 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:11.182 23:42:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 
00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:16.445 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:16.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:16.445 Found net devices under 0000:86:00.0: cvl_0_0 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:16.445 Found net devices under 0000:86:00.1: cvl_0_1 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:16.445 23:42:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:16.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:16:16.445 00:16:16.445 --- 10.0.0.2 ping statistics --- 00:16:16.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.445 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:16:16.445 00:16:16.445 --- 10.0.0.1 ping statistics --- 00:16:16.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.445 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=998023 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 998023 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@823 -- # '[' -z 998023 ']' 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio 
-- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.445 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:16.446 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.446 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:16.446 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:16.446 [2024-07-15 23:42:05.152871] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:16:16.446 [2024-07-15 23:42:05.152915] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.446 [2024-07-15 23:42:05.211668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.446 [2024-07-15 23:42:05.290606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.446 [2024-07-15 23:42:05.290640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.446 [2024-07-15 23:42:05.290647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:16.446 [2024-07-15 23:42:05.290653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:16.446 [2024-07-15 23:42:05.290658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:16.446 [2024-07-15 23:42:05.290783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:16.446 [2024-07-15 23:42:05.290891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:16.446 [2024-07-15 23:42:05.290997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:16.446 [2024-07-15 23:42:05.290998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:17.379 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:17.379 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # return 0 00:16:17.379 23:42:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.379 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.379 23:42:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.379 [2024-07-15 23:42:06.031257] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.379 Malloc0 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio 
-- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.379 23:42:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:17.380 [2024-07-15 23:42:06.078666] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:16:17.380 { 00:16:17.380 "params": { 00:16:17.380 "name": "Nvme$subsystem", 00:16:17.380 "trtype": "$TEST_TRANSPORT", 00:16:17.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:17.380 "adrfam": "ipv4", 00:16:17.380 "trsvcid": "$NVMF_PORT", 00:16:17.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:17.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:17.380 "hdgst": ${hdgst:-false}, 00:16:17.380 "ddgst": ${ddgst:-false} 00:16:17.380 }, 00:16:17.380 "method": "bdev_nvme_attach_controller" 00:16:17.380 } 00:16:17.380 EOF 00:16:17.380 )") 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:17.380 23:42:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:17.380 "params": { 00:16:17.380 "name": "Nvme1", 00:16:17.380 "trtype": "tcp", 00:16:17.380 "traddr": "10.0.0.2", 00:16:17.380 "adrfam": "ipv4", 00:16:17.380 "trsvcid": "4420", 00:16:17.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.380 "hdgst": false, 00:16:17.380 "ddgst": false 00:16:17.380 }, 00:16:17.380 "method": "bdev_nvme_attach_controller" 00:16:17.380 }' 00:16:17.380 [2024-07-15 23:42:06.127952] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:16:17.380 [2024-07-15 23:42:06.127999] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid998272 ] 00:16:17.380 [2024-07-15 23:42:06.182291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.380 [2024-07-15 23:42:06.257994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.380 [2024-07-15 23:42:06.258009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.380 [2024-07-15 23:42:06.258011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.638 I/O targets: 00:16:17.638 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:17.638 00:16:17.638 00:16:17.638 CUnit - A unit testing framework for C - Version 2.1-3 00:16:17.638 http://cunit.sourceforge.net/ 00:16:17.638 00:16:17.638 00:16:17.638 Suite: bdevio tests on: Nvme1n1 00:16:17.638 Test: blockdev write read block ...passed 00:16:17.896 Test: blockdev write zeroes read block ...passed 00:16:17.896 Test: blockdev write zeroes read no split ...passed 00:16:17.896 Test: blockdev write zeroes read split ...passed 00:16:17.896 Test: blockdev write zeroes read split partial ...passed 00:16:17.896 Test: blockdev reset ...[2024-07-15 23:42:06.745051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.896 [2024-07-15 23:42:06.745112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f96d0 (9): Bad file descriptor 00:16:17.896 [2024-07-15 23:42:06.756817] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:17.896 passed 00:16:17.896 Test: blockdev write read 8 blocks ...passed 00:16:17.896 Test: blockdev write read size > 128k ...passed 00:16:17.896 Test: blockdev write read invalid size ...passed 00:16:17.896 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:17.896 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:17.896 Test: blockdev write read max offset ...passed 00:16:18.154 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:18.154 Test: blockdev writev readv 8 blocks ...passed 00:16:18.154 Test: blockdev writev readv 30 x 1block ...passed 00:16:18.154 Test: blockdev writev readv block ...passed 00:16:18.154 Test: blockdev writev readv size > 128k ...passed 00:16:18.154 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:18.154 Test: blockdev comparev and writev ...[2024-07-15 23:42:07.015038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.154 [2024-07-15 23:42:07.015066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.154 [2024-07-15 23:42:07.015080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.154 [2024-07-15 23:42:07.015088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.154 [2024-07-15 23:42:07.015394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.154 [2024-07-15 23:42:07.015405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:18.154 [2024-07-15 23:42:07.015416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.154 [2024-07-15 23:42:07.015424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:18.154 [2024-07-15 23:42:07.015698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.154 [2024-07-15 23:42:07.015709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:18.154 [2024-07-15 23:42:07.015720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.154 [2024-07-15 23:42:07.015727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:18.154 [2024-07-15 23:42:07.016026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.154 [2024-07-15 23:42:07.016040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:18.154 [2024-07-15 23:42:07.016052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.154 [2024-07-15 23:42:07.016059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:18.154 passed 00:16:18.154 Test: blockdev nvme passthru rw ...passed 00:16:18.154 Test: blockdev nvme passthru vendor specific ...[2024-07-15 23:42:07.098683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.154 [2024-07-15 23:42:07.098699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:18.154 [2024-07-15 23:42:07.098861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.154 [2024-07-15 23:42:07.098871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:18.154 [2024-07-15 23:42:07.099023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.154 [2024-07-15 23:42:07.099034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:18.154 [2024-07-15 23:42:07.099193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.154 [2024-07-15 23:42:07.099203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:18.154 passed 00:16:18.154 Test: blockdev nvme admin passthru ...passed 00:16:18.412 Test: blockdev copy ...passed 00:16:18.412 00:16:18.412 Run Summary: Type Total Ran Passed Failed Inactive 00:16:18.412 suites 1 1 n/a 0 0 00:16:18.412 tests 23 23 23 0 0 00:16:18.412 asserts 152 152 152 0 n/a 00:16:18.412 00:16:18.412 Elapsed time = 1.236 seconds 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 
00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:18.412 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:18.412 rmmod nvme_tcp 00:16:18.412 rmmod nvme_fabrics 00:16:18.412 rmmod nvme_keyring 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 998023 ']' 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 998023 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@942 -- # '[' -z 998023 ']' 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # kill -0 998023 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # uname 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 998023 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # process_name=reactor_3 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' reactor_3 = sudo ']' 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@960 -- # echo 'killing process with pid 998023' 00:16:18.670 killing process with pid 998023 00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@961 -- # kill 998023 
00:16:18.670 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # wait 998023 00:16:18.927 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:18.927 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:18.927 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:18.927 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:18.927 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:18.927 23:42:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.927 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:18.927 23:42:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.828 23:42:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:20.828 00:16:20.828 real 0m9.785s 00:16:20.828 user 0m13.165s 00:16:20.828 sys 0m4.426s 00:16:20.828 23:42:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1118 -- # xtrace_disable 00:16:20.828 23:42:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:20.828 ************************************ 00:16:20.828 END TEST nvmf_bdevio 00:16:20.828 ************************************ 00:16:20.828 23:42:09 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:16:20.828 23:42:09 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:20.828 23:42:09 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:16:20.828 23:42:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:16:20.828 23:42:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:20.828 ************************************ 00:16:20.828 START TEST nvmf_auth_target 00:16:20.828 
************************************ 00:16:20.828 23:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:21.086 * Looking for test storage... 00:16:21.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.086 23:42:09 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.086 23:42:09 
nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:21.086 23:42:09 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:21.086 23:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:26.362 23:42:14 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:26.362 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:26.362 23:42:14 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:26.362 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:26.362 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- 
# pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:26.363 Found net devices under 0000:86:00.0: cvl_0_0 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:26.363 Found net devices under 0000:86:00.1: cvl_0_1 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:26.363 23:42:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:26.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:16:26.363 00:16:26.363 --- 10.0.0.2 ping statistics --- 00:16:26.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.363 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:26.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:26.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:16:26.363 00:16:26.363 --- 10.0.0.1 ping statistics --- 00:16:26.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.363 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@716 -- 
# xtrace_disable 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1002189 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1002189 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1002189 ']' 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:26.363 23:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.301 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:27.301 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:16:27.301 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:27.301 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:27.301 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.301 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.301 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1002431 00:16:27.301 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@727 -- # key=beeb99e73e617003e5f89c706381fa6e8d6b8c44bbec97bb 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CVF 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key beeb99e73e617003e5f89c706381fa6e8d6b8c44bbec97bb 0 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 beeb99e73e617003e5f89c706381fa6e8d6b8c44bbec97bb 0 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=beeb99e73e617003e5f89c706381fa6e8d6b8c44bbec97bb 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CVF 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CVF 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.CVF 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:27.302 23:42:16 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=02c62309b2b911193ea638d62b4285626deff0237ee96e15a33d9ef79ada6e49 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hbh 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 02c62309b2b911193ea638d62b4285626deff0237ee96e15a33d9ef79ada6e49 3 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 02c62309b2b911193ea638d62b4285626deff0237ee96e15a33d9ef79ada6e49 3 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=02c62309b2b911193ea638d62b4285626deff0237ee96e15a33d9ef79ada6e49 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hbh 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hbh 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.hbh 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # 
local -A digests 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4f2a6b23a919254583cf8bfa48f560c9 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Rr4 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4f2a6b23a919254583cf8bfa48f560c9 1 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4f2a6b23a919254583cf8bfa48f560c9 1 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4f2a6b23a919254583cf8bfa48f560c9 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Rr4 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Rr4 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Rr4 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=11568e6933e17d97a042b2ef27377b662f83aa612cf3319f 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Qwf 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 11568e6933e17d97a042b2ef27377b662f83aa612cf3319f 2 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 11568e6933e17d97a042b2ef27377b662f83aa612cf3319f 2 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=11568e6933e17d97a042b2ef27377b662f83aa612cf3319f 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:27.302 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Qwf 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Qwf 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Qwf 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1f507cb8642590dc8a14d3d433efe9306483f13f6b5ebb80 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.yQ8 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1f507cb8642590dc8a14d3d433efe9306483f13f6b5ebb80 2 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1f507cb8642590dc8a14d3d433efe9306483f13f6b5ebb80 2 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1f507cb8642590dc8a14d3d433efe9306483f13f6b5ebb80 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.yQ8 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.yQ8 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.yQ8 00:16:27.562 23:42:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0980be71288246cfc7053ebdd443000f 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.O7f 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0980be71288246cfc7053ebdd443000f 1 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0980be71288246cfc7053ebdd443000f 1 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0980be71288246cfc7053ebdd443000f 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.O7f 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.O7f 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.O7f 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1b3d9186ef8a4de7dc308ba534656d7aa30b9ef0e38e76fba4b13571812baa35 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Tp7 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1b3d9186ef8a4de7dc308ba534656d7aa30b9ef0e38e76fba4b13571812baa35 3 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1b3d9186ef8a4de7dc308ba534656d7aa30b9ef0e38e76fba4b13571812baa35 3 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1b3d9186ef8a4de7dc308ba534656d7aa30b9ef0e38e76fba4b13571812baa35 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Tp7 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Tp7 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Tp7 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1002189 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1002189 ']' 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:27.562 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.822 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:27.822 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:16:27.822 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1002431 /var/tmp/host.sock 00:16:27.822 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1002431 ']' 00:16:27.822 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/host.sock 00:16:27.822 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:16:27.822 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:27.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:27.822 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:16:27.822 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CVF 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.CVF 00:16:28.081 23:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.CVF 00:16:28.081 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.hbh ]] 00:16:28.081 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hbh 00:16:28.081 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.081 23:42:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.081 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.081 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hbh 00:16:28.081 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.hbh 00:16:28.341 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:28.341 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Rr4 00:16:28.341 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.341 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.341 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.341 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Rr4 00:16:28.341 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Rr4 00:16:28.600 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Qwf ]] 00:16:28.600 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qwf 00:16:28.600 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.600 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.600 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.600 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc 
keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qwf 00:16:28.600 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Qwf 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.yQ8 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.yQ8 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.yQ8 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.O7f ]] 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O7f 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.O7f 00:16:28.862 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.O7f 00:16:29.122 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:29.122 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Tp7 00:16:29.122 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:29.122 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.122 23:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:29.122 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Tp7 00:16:29.122 23:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Tp7 00:16:29.381 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:29.381 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:29.381 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.382 23:42:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.382 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.641 00:16:29.641 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.641 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.641 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.901 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.901 
23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.901 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:29.901 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.901 23:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:29.901 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.901 { 00:16:29.901 "cntlid": 1, 00:16:29.901 "qid": 0, 00:16:29.901 "state": "enabled", 00:16:29.901 "thread": "nvmf_tgt_poll_group_000", 00:16:29.901 "listen_address": { 00:16:29.901 "trtype": "TCP", 00:16:29.901 "adrfam": "IPv4", 00:16:29.901 "traddr": "10.0.0.2", 00:16:29.901 "trsvcid": "4420" 00:16:29.901 }, 00:16:29.901 "peer_address": { 00:16:29.901 "trtype": "TCP", 00:16:29.901 "adrfam": "IPv4", 00:16:29.901 "traddr": "10.0.0.1", 00:16:29.901 "trsvcid": "48316" 00:16:29.901 }, 00:16:29.901 "auth": { 00:16:29.901 "state": "completed", 00:16:29.901 "digest": "sha256", 00:16:29.901 "dhgroup": "null" 00:16:29.901 } 00:16:29.901 } 00:16:29.901 ]' 00:16:29.901 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.901 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.901 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.901 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:29.901 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.160 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.160 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.160 23:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.160 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:16:30.734 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.734 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.734 23:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:30.734 23:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.734 23:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:30.734 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.734 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.734 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.993 23:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.252 00:16:31.252 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.252 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.252 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:16:31.511 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.511 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.511 23:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.512 { 00:16:31.512 "cntlid": 3, 00:16:31.512 "qid": 0, 00:16:31.512 "state": "enabled", 00:16:31.512 "thread": "nvmf_tgt_poll_group_000", 00:16:31.512 "listen_address": { 00:16:31.512 "trtype": "TCP", 00:16:31.512 "adrfam": "IPv4", 00:16:31.512 "traddr": "10.0.0.2", 00:16:31.512 "trsvcid": "4420" 00:16:31.512 }, 00:16:31.512 "peer_address": { 00:16:31.512 "trtype": "TCP", 00:16:31.512 "adrfam": "IPv4", 00:16:31.512 "traddr": "10.0.0.1", 00:16:31.512 "trsvcid": "48344" 00:16:31.512 }, 00:16:31.512 "auth": { 00:16:31.512 "state": "completed", 00:16:31.512 "digest": "sha256", 00:16:31.512 "dhgroup": "null" 00:16:31.512 } 00:16:31.512 } 00:16:31.512 ]' 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:16:31.512 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.771 23:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:16:32.338 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.338 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.338 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:32.338 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.338 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:32.338 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.338 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.338 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.598 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.598 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.858 { 00:16:32.858 "cntlid": 5, 00:16:32.858 "qid": 0, 00:16:32.858 "state": "enabled", 00:16:32.858 "thread": "nvmf_tgt_poll_group_000", 00:16:32.858 "listen_address": { 00:16:32.858 "trtype": "TCP", 00:16:32.858 "adrfam": "IPv4", 00:16:32.858 "traddr": "10.0.0.2", 00:16:32.858 "trsvcid": "4420" 00:16:32.858 }, 00:16:32.858 "peer_address": { 00:16:32.858 "trtype": "TCP", 00:16:32.858 "adrfam": "IPv4", 00:16:32.858 "traddr": "10.0.0.1", 00:16:32.858 "trsvcid": "48380" 00:16:32.858 }, 00:16:32.858 "auth": { 00:16:32.858 "state": "completed", 00:16:32.858 "digest": "sha256", 00:16:32.858 "dhgroup": "null" 00:16:32.858 } 00:16:32.858 } 00:16:32.858 ]' 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.858 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.119 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:33.119 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.119 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.119 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 
-- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.120 23:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.120 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:16:33.731 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.731 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.731 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:33.731 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.731 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:33.731 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.731 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.731 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:33.991 23:42:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.991 23:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.251 00:16:34.251 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.251 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.251 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.510 { 00:16:34.510 "cntlid": 7, 00:16:34.510 "qid": 0, 00:16:34.510 "state": "enabled", 00:16:34.510 "thread": "nvmf_tgt_poll_group_000", 00:16:34.510 "listen_address": { 00:16:34.510 "trtype": "TCP", 00:16:34.510 "adrfam": "IPv4", 00:16:34.510 "traddr": "10.0.0.2", 00:16:34.510 "trsvcid": "4420" 00:16:34.510 }, 00:16:34.510 "peer_address": { 00:16:34.510 "trtype": "TCP", 00:16:34.510 "adrfam": "IPv4", 00:16:34.510 "traddr": "10.0.0.1", 00:16:34.510 "trsvcid": "48410" 00:16:34.510 }, 00:16:34.510 "auth": { 00:16:34.510 "state": "completed", 00:16:34.510 "digest": "sha256", 00:16:34.510 "dhgroup": "null" 00:16:34.510 } 00:16:34.510 } 00:16:34.510 ]' 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:34.510 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.768 23:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:16:35.338 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.338 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.338 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:35.338 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.338 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:35.338 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.338 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.338 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.338 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe2048 0 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.598 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.858 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.858 { 00:16:35.858 "cntlid": 9, 00:16:35.858 "qid": 0, 00:16:35.858 "state": "enabled", 00:16:35.858 "thread": "nvmf_tgt_poll_group_000", 00:16:35.858 "listen_address": { 00:16:35.858 "trtype": "TCP", 00:16:35.858 "adrfam": "IPv4", 00:16:35.858 "traddr": "10.0.0.2", 00:16:35.858 "trsvcid": "4420" 00:16:35.858 }, 00:16:35.858 "peer_address": { 00:16:35.858 "trtype": "TCP", 00:16:35.858 "adrfam": "IPv4", 00:16:35.858 "traddr": "10.0.0.1", 00:16:35.858 "trsvcid": "48428" 00:16:35.858 }, 00:16:35.858 "auth": { 00:16:35.858 "state": "completed", 00:16:35.858 "digest": "sha256", 00:16:35.858 "dhgroup": "ffdhe2048" 00:16:35.858 } 00:16:35.858 } 00:16:35.858 ]' 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.858 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.117 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.117 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.117 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.117 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.117 23:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.117 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:16:36.685 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.944 23:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.203 00:16:37.203 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.203 23:42:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.203 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.462 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.462 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.462 23:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:37.462 23:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.462 23:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:37.462 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.462 { 00:16:37.462 "cntlid": 11, 00:16:37.462 "qid": 0, 00:16:37.462 "state": "enabled", 00:16:37.462 "thread": "nvmf_tgt_poll_group_000", 00:16:37.462 "listen_address": { 00:16:37.462 "trtype": "TCP", 00:16:37.462 "adrfam": "IPv4", 00:16:37.462 "traddr": "10.0.0.2", 00:16:37.462 "trsvcid": "4420" 00:16:37.462 }, 00:16:37.462 "peer_address": { 00:16:37.462 "trtype": "TCP", 00:16:37.462 "adrfam": "IPv4", 00:16:37.462 "traddr": "10.0.0.1", 00:16:37.462 "trsvcid": "48460" 00:16:37.462 }, 00:16:37.462 "auth": { 00:16:37.462 "state": "completed", 00:16:37.462 "digest": "sha256", 00:16:37.462 "dhgroup": "ffdhe2048" 00:16:37.462 } 00:16:37.462 } 00:16:37.462 ]' 00:16:37.462 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.462 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.462 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.462 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.463 23:42:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.463 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.463 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.463 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.721 23:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:16:38.289 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.289 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.289 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:38.289 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.289 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:38.289 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.289 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.289 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.548 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.808 
00:16:38.808 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.808 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.808 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.067 { 00:16:39.067 "cntlid": 13, 00:16:39.067 "qid": 0, 00:16:39.067 "state": "enabled", 00:16:39.067 "thread": "nvmf_tgt_poll_group_000", 00:16:39.067 "listen_address": { 00:16:39.067 "trtype": "TCP", 00:16:39.067 "adrfam": "IPv4", 00:16:39.067 "traddr": "10.0.0.2", 00:16:39.067 "trsvcid": "4420" 00:16:39.067 }, 00:16:39.067 "peer_address": { 00:16:39.067 "trtype": "TCP", 00:16:39.067 "adrfam": "IPv4", 00:16:39.067 "traddr": "10.0.0.1", 00:16:39.067 "trsvcid": "40358" 00:16:39.067 }, 00:16:39.067 "auth": { 00:16:39.067 "state": "completed", 00:16:39.067 "digest": "sha256", 00:16:39.067 "dhgroup": "ffdhe2048" 00:16:39.067 } 00:16:39.067 } 00:16:39.067 ]' 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.067 23:42:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.067 23:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.327 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:16:39.895 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.895 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.895 23:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:39.895 23:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.895 23:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:39.895 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.895 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:16:39.895 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.154 23:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.413 
00:16:40.413 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.413 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.413 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.413 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.413 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.413 23:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:40.413 23:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.413 23:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:40.413 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.413 { 00:16:40.413 "cntlid": 15, 00:16:40.413 "qid": 0, 00:16:40.413 "state": "enabled", 00:16:40.413 "thread": "nvmf_tgt_poll_group_000", 00:16:40.413 "listen_address": { 00:16:40.413 "trtype": "TCP", 00:16:40.413 "adrfam": "IPv4", 00:16:40.413 "traddr": "10.0.0.2", 00:16:40.413 "trsvcid": "4420" 00:16:40.413 }, 00:16:40.413 "peer_address": { 00:16:40.413 "trtype": "TCP", 00:16:40.413 "adrfam": "IPv4", 00:16:40.413 "traddr": "10.0.0.1", 00:16:40.413 "trsvcid": "40390" 00:16:40.413 }, 00:16:40.413 "auth": { 00:16:40.413 "state": "completed", 00:16:40.413 "digest": "sha256", 00:16:40.413 "dhgroup": "ffdhe2048" 00:16:40.413 } 00:16:40.413 } 00:16:40.413 ]' 00:16:40.413 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.671 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.671 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.671 23:42:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.671 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.671 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.671 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.671 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.929 23:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.495 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:41.753 00:16:41.753 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.753 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.753 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.011 { 00:16:42.011 "cntlid": 17, 00:16:42.011 "qid": 0, 00:16:42.011 "state": "enabled", 00:16:42.011 "thread": "nvmf_tgt_poll_group_000", 00:16:42.011 "listen_address": { 00:16:42.011 "trtype": "TCP", 00:16:42.011 "adrfam": "IPv4", 00:16:42.011 "traddr": "10.0.0.2", 00:16:42.011 "trsvcid": "4420" 00:16:42.011 }, 00:16:42.011 "peer_address": { 00:16:42.011 "trtype": "TCP", 00:16:42.011 "adrfam": "IPv4", 00:16:42.011 "traddr": "10.0.0.1", 00:16:42.011 "trsvcid": "40428" 00:16:42.011 }, 00:16:42.011 "auth": { 00:16:42.011 "state": "completed", 00:16:42.011 "digest": "sha256", 00:16:42.011 "dhgroup": "ffdhe3072" 00:16:42.011 } 00:16:42.011 } 00:16:42.011 ]' 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.011 23:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.270 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:16:42.836 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.836 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.836 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:42.836 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.836 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:42.836 23:42:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.836 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.836 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.095 23:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.353 00:16:43.353 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.353 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.353 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.611 { 00:16:43.611 "cntlid": 19, 00:16:43.611 "qid": 0, 00:16:43.611 "state": "enabled", 00:16:43.611 "thread": "nvmf_tgt_poll_group_000", 00:16:43.611 "listen_address": { 00:16:43.611 "trtype": "TCP", 00:16:43.611 "adrfam": "IPv4", 00:16:43.611 "traddr": "10.0.0.2", 00:16:43.611 "trsvcid": "4420" 00:16:43.611 }, 00:16:43.611 "peer_address": { 00:16:43.611 "trtype": "TCP", 00:16:43.611 "adrfam": "IPv4", 00:16:43.611 "traddr": "10.0.0.1", 00:16:43.611 "trsvcid": "40468" 00:16:43.611 }, 00:16:43.611 "auth": { 00:16:43.611 "state": "completed", 00:16:43.611 "digest": "sha256", 00:16:43.611 "dhgroup": "ffdhe3072" 00:16:43.611 } 00:16:43.611 } 00:16:43.611 ]' 00:16:43.611 
23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.611 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.870 23:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:16:44.436 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.436 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.436 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:44.436 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.436 23:42:33 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:44.436 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.436 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:44.436 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:44.694 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.952 00:16:44.952 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.952 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.952 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.212 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.212 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.212 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:45.212 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.212 23:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:45.212 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.212 { 00:16:45.212 "cntlid": 21, 00:16:45.212 "qid": 0, 00:16:45.212 "state": "enabled", 00:16:45.212 "thread": "nvmf_tgt_poll_group_000", 00:16:45.212 "listen_address": { 00:16:45.212 "trtype": "TCP", 00:16:45.212 "adrfam": "IPv4", 00:16:45.212 "traddr": "10.0.0.2", 00:16:45.212 "trsvcid": "4420" 00:16:45.212 }, 00:16:45.212 "peer_address": { 00:16:45.212 "trtype": "TCP", 00:16:45.212 "adrfam": "IPv4", 00:16:45.212 "traddr": "10.0.0.1", 00:16:45.212 "trsvcid": "40500" 00:16:45.212 }, 00:16:45.212 "auth": { 00:16:45.212 "state": "completed", 00:16:45.212 "digest": 
"sha256", 00:16:45.212 "dhgroup": "ffdhe3072" 00:16:45.212 } 00:16:45.212 } 00:16:45.212 ]' 00:16:45.212 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.212 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.212 23:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.212 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:45.212 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.212 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.212 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.212 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.471 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:46.039 23:42:34 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.039 23:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:46.039 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:46.039 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.039 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:46.039 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.039 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.340 00:16:46.340 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.340 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.340 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.599 { 00:16:46.599 "cntlid": 23, 00:16:46.599 "qid": 0, 00:16:46.599 "state": "enabled", 00:16:46.599 "thread": "nvmf_tgt_poll_group_000", 00:16:46.599 "listen_address": { 00:16:46.599 "trtype": "TCP", 00:16:46.599 "adrfam": "IPv4", 00:16:46.599 "traddr": "10.0.0.2", 00:16:46.599 "trsvcid": "4420" 00:16:46.599 }, 00:16:46.599 "peer_address": { 00:16:46.599 "trtype": "TCP", 00:16:46.599 "adrfam": "IPv4", 00:16:46.599 "traddr": "10.0.0.1", 00:16:46.599 "trsvcid": "40530" 00:16:46.599 }, 00:16:46.599 "auth": 
{ 00:16:46.599 "state": "completed", 00:16:46.599 "digest": "sha256", 00:16:46.599 "dhgroup": "ffdhe3072" 00:16:46.599 } 00:16:46.599 } 00:16:46.599 ]' 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.599 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.858 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.858 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.858 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.858 23:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:16:47.426 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.426 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.426 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:47.426 23:42:36 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.426 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:47.426 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.426 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.426 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.426 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.685 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.944 00:16:47.944 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.944 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.944 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.204 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.204 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.204 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:48.204 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.204 23:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:48.204 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.204 { 00:16:48.204 "cntlid": 25, 00:16:48.204 "qid": 0, 00:16:48.204 "state": "enabled", 00:16:48.204 "thread": "nvmf_tgt_poll_group_000", 00:16:48.204 "listen_address": { 00:16:48.204 "trtype": "TCP", 00:16:48.204 "adrfam": "IPv4", 00:16:48.204 "traddr": "10.0.0.2", 00:16:48.204 "trsvcid": "4420" 00:16:48.204 }, 00:16:48.204 "peer_address": { 00:16:48.204 "trtype": "TCP", 
00:16:48.204 "adrfam": "IPv4", 00:16:48.204 "traddr": "10.0.0.1", 00:16:48.204 "trsvcid": "40546" 00:16:48.204 }, 00:16:48.204 "auth": { 00:16:48.204 "state": "completed", 00:16:48.204 "digest": "sha256", 00:16:48.204 "dhgroup": "ffdhe4096" 00:16:48.204 } 00:16:48.204 } 00:16:48.204 ]' 00:16:48.204 23:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.204 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.204 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.204 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.204 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.204 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.204 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.204 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.463 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:16:49.031 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.031 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.031 23:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:49.031 23:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.031 23:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:49.031 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.031 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.031 23:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.290 23:42:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.290 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.550 00:16:49.550 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.550 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.550 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.550 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.550 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.550 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:49.550 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.550 23:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:49.550 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.550 { 00:16:49.550 "cntlid": 27, 00:16:49.550 "qid": 0, 00:16:49.550 "state": "enabled", 00:16:49.550 "thread": "nvmf_tgt_poll_group_000", 00:16:49.550 "listen_address": { 00:16:49.550 "trtype": "TCP", 00:16:49.550 "adrfam": 
"IPv4", 00:16:49.550 "traddr": "10.0.0.2", 00:16:49.550 "trsvcid": "4420" 00:16:49.550 }, 00:16:49.550 "peer_address": { 00:16:49.550 "trtype": "TCP", 00:16:49.550 "adrfam": "IPv4", 00:16:49.550 "traddr": "10.0.0.1", 00:16:49.550 "trsvcid": "43704" 00:16:49.550 }, 00:16:49.550 "auth": { 00:16:49.550 "state": "completed", 00:16:49.550 "digest": "sha256", 00:16:49.550 "dhgroup": "ffdhe4096" 00:16:49.550 } 00:16:49.550 } 00:16:49.550 ]' 00:16:49.550 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.824 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.824 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.824 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.824 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.824 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.824 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.824 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.141 23:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:50.710 23:42:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.710 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.969 00:16:50.969 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.969 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.969 23:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.228 { 00:16:51.228 "cntlid": 29, 00:16:51.228 "qid": 0, 00:16:51.228 "state": "enabled", 00:16:51.228 "thread": 
"nvmf_tgt_poll_group_000", 00:16:51.228 "listen_address": { 00:16:51.228 "trtype": "TCP", 00:16:51.228 "adrfam": "IPv4", 00:16:51.228 "traddr": "10.0.0.2", 00:16:51.228 "trsvcid": "4420" 00:16:51.228 }, 00:16:51.228 "peer_address": { 00:16:51.228 "trtype": "TCP", 00:16:51.228 "adrfam": "IPv4", 00:16:51.228 "traddr": "10.0.0.1", 00:16:51.228 "trsvcid": "43722" 00:16:51.228 }, 00:16:51.228 "auth": { 00:16:51.228 "state": "completed", 00:16:51.228 "digest": "sha256", 00:16:51.228 "dhgroup": "ffdhe4096" 00:16:51.228 } 00:16:51.228 } 00:16:51.228 ]' 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.228 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.486 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.486 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.486 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.486 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:16:52.053 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.053 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.053 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:52.053 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.053 23:42:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:52.053 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.053 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.053 23:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.312 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.571 00:16:52.571 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.571 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.571 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.831 { 00:16:52.831 "cntlid": 31, 00:16:52.831 "qid": 0, 00:16:52.831 "state": "enabled", 00:16:52.831 "thread": 
"nvmf_tgt_poll_group_000", 00:16:52.831 "listen_address": { 00:16:52.831 "trtype": "TCP", 00:16:52.831 "adrfam": "IPv4", 00:16:52.831 "traddr": "10.0.0.2", 00:16:52.831 "trsvcid": "4420" 00:16:52.831 }, 00:16:52.831 "peer_address": { 00:16:52.831 "trtype": "TCP", 00:16:52.831 "adrfam": "IPv4", 00:16:52.831 "traddr": "10.0.0.1", 00:16:52.831 "trsvcid": "43744" 00:16:52.831 }, 00:16:52.831 "auth": { 00:16:52.831 "state": "completed", 00:16:52.831 "digest": "sha256", 00:16:52.831 "dhgroup": "ffdhe4096" 00:16:52.831 } 00:16:52.831 } 00:16:52.831 ]' 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.831 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.090 23:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:16:53.658 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.658 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.658 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:53.658 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:53.658 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.658 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:53.658 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.658 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.658 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.658 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.917 23:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.177 00:16:54.177 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.177 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.177 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:54.437 { 00:16:54.437 "cntlid": 33, 00:16:54.437 "qid": 0, 00:16:54.437 "state": "enabled", 00:16:54.437 "thread": "nvmf_tgt_poll_group_000", 00:16:54.437 "listen_address": { 00:16:54.437 "trtype": "TCP", 00:16:54.437 "adrfam": "IPv4", 00:16:54.437 "traddr": "10.0.0.2", 00:16:54.437 "trsvcid": "4420" 00:16:54.437 }, 00:16:54.437 "peer_address": { 00:16:54.437 "trtype": "TCP", 00:16:54.437 "adrfam": "IPv4", 00:16:54.437 "traddr": "10.0.0.1", 00:16:54.437 "trsvcid": "43774" 00:16:54.437 }, 00:16:54.437 "auth": { 00:16:54.437 "state": "completed", 00:16:54.437 "digest": "sha256", 00:16:54.437 "dhgroup": "ffdhe6144" 00:16:54.437 } 00:16:54.437 } 00:16:54.437 ]' 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.437 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.698 23:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret 
DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:16:55.267 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.267 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.267 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:55.267 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.267 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:55.267 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.267 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.267 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.526 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.785 00:16:55.785 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.785 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.785 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.045 23:42:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.045 { 00:16:56.045 "cntlid": 35, 00:16:56.045 "qid": 0, 00:16:56.045 "state": "enabled", 00:16:56.045 "thread": "nvmf_tgt_poll_group_000", 00:16:56.045 "listen_address": { 00:16:56.045 "trtype": "TCP", 00:16:56.045 "adrfam": "IPv4", 00:16:56.045 "traddr": "10.0.0.2", 00:16:56.045 "trsvcid": "4420" 00:16:56.045 }, 00:16:56.045 "peer_address": { 00:16:56.045 "trtype": "TCP", 00:16:56.045 "adrfam": "IPv4", 00:16:56.045 "traddr": "10.0.0.1", 00:16:56.045 "trsvcid": "43804" 00:16:56.045 }, 00:16:56.045 "auth": { 00:16:56.045 "state": "completed", 00:16:56.045 "digest": "sha256", 00:16:56.045 "dhgroup": "ffdhe6144" 00:16:56.045 } 00:16:56.045 } 00:16:56.045 ]' 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.045 23:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.304 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:16:56.872 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.872 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.872 23:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:56.872 23:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.872 23:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:56.873 23:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.132 23:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:57.132 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.132 23:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.391 00:16:57.391 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.391 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.391 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:57.651 23:42:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.651 { 00:16:57.651 "cntlid": 37, 00:16:57.651 "qid": 0, 00:16:57.651 "state": "enabled", 00:16:57.651 "thread": "nvmf_tgt_poll_group_000", 00:16:57.651 "listen_address": { 00:16:57.651 "trtype": "TCP", 00:16:57.651 "adrfam": "IPv4", 00:16:57.651 "traddr": "10.0.0.2", 00:16:57.651 "trsvcid": "4420" 00:16:57.651 }, 00:16:57.651 "peer_address": { 00:16:57.651 "trtype": "TCP", 00:16:57.651 "adrfam": "IPv4", 00:16:57.651 "traddr": "10.0.0.1", 00:16:57.651 "trsvcid": "43834" 00:16:57.651 }, 00:16:57.651 "auth": { 00:16:57.651 "state": "completed", 00:16:57.651 "digest": "sha256", 00:16:57.651 "dhgroup": "ffdhe6144" 00:16:57.651 } 00:16:57.651 } 00:16:57.651 ]' 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.651 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.910 23:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:58.480 23:42:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:58.480 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.739 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:58.739 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.739 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.997 00:16:58.997 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.997 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.997 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.256 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.256 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.256 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:59.256 23:42:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.256 23:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:59.256 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.256 { 00:16:59.256 "cntlid": 39, 00:16:59.256 "qid": 0, 00:16:59.256 "state": "enabled", 00:16:59.256 "thread": "nvmf_tgt_poll_group_000", 00:16:59.256 "listen_address": { 00:16:59.256 "trtype": "TCP", 00:16:59.256 "adrfam": "IPv4", 00:16:59.256 "traddr": "10.0.0.2", 00:16:59.256 "trsvcid": "4420" 00:16:59.256 }, 00:16:59.256 "peer_address": { 00:16:59.256 "trtype": "TCP", 00:16:59.256 "adrfam": "IPv4", 00:16:59.256 "traddr": "10.0.0.1", 00:16:59.256 "trsvcid": "53176" 00:16:59.256 }, 00:16:59.256 "auth": { 00:16:59.256 "state": "completed", 00:16:59.256 "digest": "sha256", 00:16:59.256 "dhgroup": "ffdhe6144" 00:16:59.256 } 00:16:59.256 } 00:16:59.256 ]' 00:16:59.256 23:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.256 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.256 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.256 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.256 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.256 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.256 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.256 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.516 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:17:00.090 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.090 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.090 23:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:00.090 23:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.090 23:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:00.090 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.090 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.090 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.090 23:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.090 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.660 00:17:00.660 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.660 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.660 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.918 { 00:17:00.918 "cntlid": 41, 00:17:00.918 "qid": 0, 00:17:00.918 "state": "enabled", 00:17:00.918 "thread": "nvmf_tgt_poll_group_000", 00:17:00.918 "listen_address": { 00:17:00.918 "trtype": "TCP", 00:17:00.918 "adrfam": "IPv4", 00:17:00.918 "traddr": "10.0.0.2", 00:17:00.918 "trsvcid": "4420" 00:17:00.918 }, 00:17:00.918 "peer_address": { 00:17:00.918 "trtype": "TCP", 00:17:00.918 "adrfam": "IPv4", 00:17:00.918 "traddr": "10.0.0.1", 00:17:00.918 "trsvcid": "53198" 00:17:00.918 }, 00:17:00.918 "auth": { 00:17:00.918 "state": "completed", 00:17:00.918 "digest": "sha256", 00:17:00.918 "dhgroup": "ffdhe8192" 00:17:00.918 } 00:17:00.918 } 00:17:00.918 ]' 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.918 23:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:01.177 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:17:01.744 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.744 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:01.744 23:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:01.744 23:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.744 23:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:01.744 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.744 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:01.744 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha256 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.003 23:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.570 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.570 { 00:17:02.570 "cntlid": 43, 00:17:02.570 "qid": 0, 00:17:02.570 "state": "enabled", 00:17:02.570 "thread": "nvmf_tgt_poll_group_000", 00:17:02.570 "listen_address": { 00:17:02.570 "trtype": "TCP", 00:17:02.570 "adrfam": "IPv4", 00:17:02.570 "traddr": "10.0.0.2", 00:17:02.570 "trsvcid": "4420" 00:17:02.570 }, 00:17:02.570 "peer_address": { 00:17:02.570 "trtype": "TCP", 00:17:02.570 "adrfam": "IPv4", 00:17:02.570 "traddr": "10.0.0.1", 00:17:02.570 "trsvcid": "53224" 00:17:02.570 }, 00:17:02.570 "auth": { 00:17:02.570 "state": "completed", 00:17:02.570 "digest": "sha256", 00:17:02.570 "dhgroup": "ffdhe8192" 00:17:02.570 } 00:17:02.570 } 00:17:02.570 ]' 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.570 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.828 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.828 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.828 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.828 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.828 23:42:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.828 23:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:17:03.396 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.396 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.396 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:03.396 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.655 23:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.223 00:17:04.223 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.223 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.223 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.481 { 00:17:04.481 "cntlid": 45, 00:17:04.481 "qid": 0, 00:17:04.481 "state": "enabled", 00:17:04.481 "thread": "nvmf_tgt_poll_group_000", 00:17:04.481 "listen_address": { 00:17:04.481 "trtype": "TCP", 00:17:04.481 "adrfam": "IPv4", 00:17:04.481 "traddr": "10.0.0.2", 00:17:04.481 "trsvcid": "4420" 00:17:04.481 }, 00:17:04.481 "peer_address": { 00:17:04.481 "trtype": "TCP", 00:17:04.481 "adrfam": "IPv4", 00:17:04.481 "traddr": "10.0.0.1", 00:17:04.481 "trsvcid": "53252" 00:17:04.481 }, 00:17:04.481 "auth": { 00:17:04.481 "state": "completed", 00:17:04.481 "digest": "sha256", 00:17:04.481 "dhgroup": "ffdhe8192" 00:17:04.481 } 00:17:04.481 } 00:17:04.481 ]' 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:04.481 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.739 23:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:05.306 23:42:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:05.306 23:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.564 23:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:05.564 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.564 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.823 00:17:05.823 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.823 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.823 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:06.082 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.082 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.082 23:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:06.082 23:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.082 23:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:06.082 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.082 { 00:17:06.082 "cntlid": 47, 00:17:06.082 "qid": 0, 00:17:06.082 "state": "enabled", 00:17:06.082 "thread": "nvmf_tgt_poll_group_000", 00:17:06.082 "listen_address": { 00:17:06.082 "trtype": "TCP", 00:17:06.082 "adrfam": "IPv4", 00:17:06.082 "traddr": "10.0.0.2", 00:17:06.082 "trsvcid": "4420" 00:17:06.082 }, 00:17:06.082 "peer_address": { 00:17:06.082 "trtype": "TCP", 00:17:06.082 "adrfam": "IPv4", 00:17:06.082 "traddr": "10.0.0.1", 00:17:06.082 "trsvcid": "53266" 00:17:06.082 }, 00:17:06.082 "auth": { 00:17:06.082 "state": "completed", 00:17:06.082 "digest": "sha256", 00:17:06.082 "dhgroup": "ffdhe8192" 00:17:06.082 } 00:17:06.082 } 00:17:06.082 ]' 00:17:06.082 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.082 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.082 23:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.082 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:06.082 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.340 23:42:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.340 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:17:06.956 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.956 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.956 23:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:06.956 23:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.956 23:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:06.956 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:06.956 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.956 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.956 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.956 23:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:17:07.215 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:07.215 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.215 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:07.215 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:07.215 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:07.215 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.215 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.215 23:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:07.215 23:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.216 23:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:07.216 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.216 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.475 00:17:07.475 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.475 23:42:56 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.475 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.475 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.475 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.475 23:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:07.475 23:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.735 23:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:07.735 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.735 { 00:17:07.735 "cntlid": 49, 00:17:07.735 "qid": 0, 00:17:07.735 "state": "enabled", 00:17:07.735 "thread": "nvmf_tgt_poll_group_000", 00:17:07.735 "listen_address": { 00:17:07.735 "trtype": "TCP", 00:17:07.735 "adrfam": "IPv4", 00:17:07.735 "traddr": "10.0.0.2", 00:17:07.735 "trsvcid": "4420" 00:17:07.735 }, 00:17:07.735 "peer_address": { 00:17:07.735 "trtype": "TCP", 00:17:07.735 "adrfam": "IPv4", 00:17:07.735 "traddr": "10.0.0.1", 00:17:07.735 "trsvcid": "53280" 00:17:07.735 }, 00:17:07.735 "auth": { 00:17:07.735 "state": "completed", 00:17:07.735 "digest": "sha384", 00:17:07.735 "dhgroup": "null" 00:17:07.735 } 00:17:07.735 } 00:17:07.735 ]' 00:17:07.735 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.735 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.735 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.735 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:07.735 23:42:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.735 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.735 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.735 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.995 23:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.564 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.823 00:17:08.823 
23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.823 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.823 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.082 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.082 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.082 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:09.082 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.082 23:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:09.082 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.082 { 00:17:09.082 "cntlid": 51, 00:17:09.082 "qid": 0, 00:17:09.082 "state": "enabled", 00:17:09.082 "thread": "nvmf_tgt_poll_group_000", 00:17:09.082 "listen_address": { 00:17:09.082 "trtype": "TCP", 00:17:09.082 "adrfam": "IPv4", 00:17:09.082 "traddr": "10.0.0.2", 00:17:09.082 "trsvcid": "4420" 00:17:09.082 }, 00:17:09.082 "peer_address": { 00:17:09.082 "trtype": "TCP", 00:17:09.082 "adrfam": "IPv4", 00:17:09.082 "traddr": "10.0.0.1", 00:17:09.082 "trsvcid": "40242" 00:17:09.082 }, 00:17:09.082 "auth": { 00:17:09.082 "state": "completed", 00:17:09.082 "digest": "sha384", 00:17:09.082 "dhgroup": "null" 00:17:09.082 } 00:17:09.082 } 00:17:09.082 ]' 00:17:09.082 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.082 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.082 23:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.082 23:42:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:09.082 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.341 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.341 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.341 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.342 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:17:09.910 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.910 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.910 23:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:09.910 23:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.910 23:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:09.910 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.910 23:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:09.911 23:42:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:10.169 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.170 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:10.427 00:17:10.428 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.428 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.428 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.686 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.686 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.686 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:10.686 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.686 23:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:10.686 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.686 { 00:17:10.686 "cntlid": 53, 00:17:10.686 "qid": 0, 00:17:10.686 "state": "enabled", 00:17:10.686 "thread": "nvmf_tgt_poll_group_000", 00:17:10.686 "listen_address": { 00:17:10.686 "trtype": "TCP", 00:17:10.686 "adrfam": "IPv4", 00:17:10.686 "traddr": "10.0.0.2", 00:17:10.686 "trsvcid": "4420" 00:17:10.686 }, 00:17:10.686 "peer_address": { 00:17:10.686 "trtype": "TCP", 00:17:10.686 "adrfam": "IPv4", 00:17:10.686 "traddr": "10.0.0.1", 00:17:10.686 "trsvcid": "40256" 00:17:10.686 }, 00:17:10.686 "auth": { 00:17:10.686 "state": "completed", 00:17:10.686 "digest": "sha384", 00:17:10.686 "dhgroup": "null" 00:17:10.686 } 00:17:10.686 } 00:17:10.686 ]' 00:17:10.687 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.687 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.687 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:17:10.687 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:10.687 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.687 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.687 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.687 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.945 23:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:17:11.513 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.513 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.513 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:11.513 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.513 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:11.513 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.513 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:17:11.513 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.772 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:17:11.773 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.773 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.032 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.032 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.032 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:12.032 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.032 23:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:12.032 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.032 { 00:17:12.032 "cntlid": 55, 00:17:12.032 "qid": 0, 00:17:12.032 "state": "enabled", 00:17:12.032 "thread": "nvmf_tgt_poll_group_000", 00:17:12.032 "listen_address": { 00:17:12.032 "trtype": "TCP", 00:17:12.032 "adrfam": "IPv4", 00:17:12.032 "traddr": "10.0.0.2", 00:17:12.032 "trsvcid": "4420" 00:17:12.032 }, 00:17:12.032 "peer_address": { 00:17:12.032 "trtype": "TCP", 00:17:12.032 "adrfam": "IPv4", 00:17:12.032 "traddr": "10.0.0.1", 00:17:12.032 "trsvcid": "40266" 00:17:12.032 }, 00:17:12.032 "auth": { 00:17:12.032 "state": "completed", 00:17:12.032 "digest": "sha384", 00:17:12.032 "dhgroup": "null" 00:17:12.032 } 00:17:12.032 } 00:17:12.032 ]' 00:17:12.032 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.032 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.032 23:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.291 
23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:12.291 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.291 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.291 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.291 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.291 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:17:12.860 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.860 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.860 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:12.860 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.860 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:12.860 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.860 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.860 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.860 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.119 23:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.379 00:17:13.379 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.380 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.380 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.640 { 00:17:13.640 "cntlid": 57, 00:17:13.640 "qid": 0, 00:17:13.640 "state": "enabled", 00:17:13.640 "thread": "nvmf_tgt_poll_group_000", 00:17:13.640 "listen_address": { 00:17:13.640 "trtype": "TCP", 00:17:13.640 "adrfam": "IPv4", 00:17:13.640 "traddr": "10.0.0.2", 00:17:13.640 "trsvcid": "4420" 00:17:13.640 }, 00:17:13.640 "peer_address": { 00:17:13.640 "trtype": "TCP", 00:17:13.640 "adrfam": "IPv4", 00:17:13.640 "traddr": "10.0.0.1", 00:17:13.640 "trsvcid": "40278" 00:17:13.640 }, 00:17:13.640 "auth": { 00:17:13.640 "state": "completed", 00:17:13.640 "digest": "sha384", 00:17:13.640 "dhgroup": "ffdhe2048" 00:17:13.640 } 00:17:13.640 } 00:17:13.640 ]' 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.640 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.898 23:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:14.464 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.732 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:14.732 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.732 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.732 00:17:14.732 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.732 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.732 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.991 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.991 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.991 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:14.991 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.991 23:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:14.991 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.991 { 00:17:14.991 "cntlid": 59, 00:17:14.991 "qid": 0, 00:17:14.991 "state": "enabled", 00:17:14.991 "thread": "nvmf_tgt_poll_group_000", 00:17:14.991 "listen_address": { 00:17:14.991 "trtype": "TCP", 00:17:14.991 "adrfam": "IPv4", 00:17:14.991 "traddr": "10.0.0.2", 00:17:14.991 "trsvcid": "4420" 00:17:14.991 }, 00:17:14.991 "peer_address": { 00:17:14.991 "trtype": "TCP", 00:17:14.991 "adrfam": "IPv4", 00:17:14.991 "traddr": "10.0.0.1", 00:17:14.991 "trsvcid": "40302" 00:17:14.991 }, 00:17:14.991 "auth": { 00:17:14.991 "state": "completed", 00:17:14.991 "digest": "sha384", 00:17:14.991 "dhgroup": "ffdhe2048" 00:17:14.991 } 00:17:14.991 } 00:17:14.991 ]' 00:17:14.991 
23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.991 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.991 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.991 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.991 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:15.250 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.250 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.250 23:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.250 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:17:15.817 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.817 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.817 23:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:15.817 23:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.074 23:43:04 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:16.074 23:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.332 00:17:16.332 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.332 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.332 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.591 { 00:17:16.591 "cntlid": 61, 00:17:16.591 "qid": 0, 00:17:16.591 "state": "enabled", 00:17:16.591 "thread": "nvmf_tgt_poll_group_000", 00:17:16.591 "listen_address": { 00:17:16.591 "trtype": "TCP", 00:17:16.591 "adrfam": "IPv4", 00:17:16.591 "traddr": "10.0.0.2", 00:17:16.591 "trsvcid": "4420" 00:17:16.591 }, 00:17:16.591 "peer_address": { 00:17:16.591 "trtype": "TCP", 00:17:16.591 "adrfam": "IPv4", 00:17:16.591 "traddr": "10.0.0.1", 00:17:16.591 "trsvcid": "40340" 00:17:16.591 }, 00:17:16.591 "auth": { 00:17:16.591 "state": "completed", 00:17:16.591 "digest": 
"sha384", 00:17:16.591 "dhgroup": "ffdhe2048" 00:17:16.591 } 00:17:16.591 } 00:17:16.591 ]' 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.591 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.850 23:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:17:17.418 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.418 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.418 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:17.418 23:43:06 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.418 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:17.418 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.418 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.418 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.677 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.677 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.936 { 00:17:17.936 "cntlid": 63, 00:17:17.936 "qid": 0, 00:17:17.936 "state": "enabled", 00:17:17.936 "thread": "nvmf_tgt_poll_group_000", 00:17:17.936 "listen_address": { 00:17:17.936 "trtype": "TCP", 00:17:17.936 "adrfam": "IPv4", 00:17:17.936 "traddr": "10.0.0.2", 00:17:17.936 "trsvcid": "4420" 00:17:17.936 }, 00:17:17.936 "peer_address": { 00:17:17.936 "trtype": "TCP", 00:17:17.936 "adrfam": "IPv4", 00:17:17.936 "traddr": "10.0.0.1", 00:17:17.936 "trsvcid": "40360" 00:17:17.936 }, 00:17:17.936 "auth": 
{ 00:17:17.936 "state": "completed", 00:17:17.936 "digest": "sha384", 00:17:17.936 "dhgroup": "ffdhe2048" 00:17:17.936 } 00:17:17.936 } 00:17:17.936 ]' 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.936 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.195 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.195 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.195 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.195 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.195 23:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.195 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:17:18.763 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.763 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:19.022 23:43:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.022 23:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.281 00:17:19.281 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.281 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.281 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.541 { 00:17:19.541 "cntlid": 65, 00:17:19.541 "qid": 0, 00:17:19.541 "state": "enabled", 00:17:19.541 "thread": "nvmf_tgt_poll_group_000", 00:17:19.541 "listen_address": { 00:17:19.541 "trtype": "TCP", 00:17:19.541 "adrfam": "IPv4", 00:17:19.541 "traddr": "10.0.0.2", 00:17:19.541 "trsvcid": "4420" 00:17:19.541 }, 00:17:19.541 "peer_address": { 00:17:19.541 "trtype": "TCP", 
00:17:19.541 "adrfam": "IPv4", 00:17:19.541 "traddr": "10.0.0.1", 00:17:19.541 "trsvcid": "34374" 00:17:19.541 }, 00:17:19.541 "auth": { 00:17:19.541 "state": "completed", 00:17:19.541 "digest": "sha384", 00:17:19.541 "dhgroup": "ffdhe3072" 00:17:19.541 } 00:17:19.541 } 00:17:19.541 ]' 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.541 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.799 23:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:17:20.368 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.368 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.368 23:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:20.368 23:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.368 23:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:20.368 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.368 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.368 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.627 23:43:09 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.627 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.886 00:17:20.886 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.887 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.887 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.887 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.887 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.887 23:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:20.887 23:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.887 23:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:20.887 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.887 { 00:17:20.887 "cntlid": 67, 00:17:20.887 "qid": 0, 00:17:20.887 "state": "enabled", 00:17:20.887 "thread": "nvmf_tgt_poll_group_000", 00:17:20.887 "listen_address": { 00:17:20.887 "trtype": "TCP", 00:17:20.887 "adrfam": 
"IPv4", 00:17:20.887 "traddr": "10.0.0.2", 00:17:20.887 "trsvcid": "4420" 00:17:20.887 }, 00:17:20.887 "peer_address": { 00:17:20.887 "trtype": "TCP", 00:17:20.887 "adrfam": "IPv4", 00:17:20.887 "traddr": "10.0.0.1", 00:17:20.887 "trsvcid": "34392" 00:17:20.887 }, 00:17:20.887 "auth": { 00:17:20.887 "state": "completed", 00:17:20.887 "digest": "sha384", 00:17:20.887 "dhgroup": "ffdhe3072" 00:17:20.887 } 00:17:20.887 } 00:17:20.887 ]' 00:17:20.887 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.146 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.146 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.146 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:21.146 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.146 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.146 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.146 23:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.405 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:17:21.972 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:17:21.972 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.972 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:21.972 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.972 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:21.972 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.972 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.972 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:21.973 23:43:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.973 23:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.231 00:17:22.231 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.231 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.231 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.490 { 00:17:22.490 "cntlid": 69, 00:17:22.490 "qid": 0, 00:17:22.490 "state": "enabled", 00:17:22.490 "thread": 
"nvmf_tgt_poll_group_000", 00:17:22.490 "listen_address": { 00:17:22.490 "trtype": "TCP", 00:17:22.490 "adrfam": "IPv4", 00:17:22.490 "traddr": "10.0.0.2", 00:17:22.490 "trsvcid": "4420" 00:17:22.490 }, 00:17:22.490 "peer_address": { 00:17:22.490 "trtype": "TCP", 00:17:22.490 "adrfam": "IPv4", 00:17:22.490 "traddr": "10.0.0.1", 00:17:22.490 "trsvcid": "34416" 00:17:22.490 }, 00:17:22.490 "auth": { 00:17:22.490 "state": "completed", 00:17:22.490 "digest": "sha384", 00:17:22.490 "dhgroup": "ffdhe3072" 00:17:22.490 } 00:17:22.490 } 00:17:22.490 ]' 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.490 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.748 23:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:17:23.315 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.315 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.315 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:23.315 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.315 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:23.315 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.315 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.315 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.600 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.859 00:17:23.859 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.859 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.859 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.859 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.859 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.859 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:23.859 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.859 23:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:23.859 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.859 { 00:17:23.859 "cntlid": 71, 00:17:23.859 "qid": 0, 00:17:23.859 "state": "enabled", 00:17:23.859 "thread": 
"nvmf_tgt_poll_group_000", 00:17:23.859 "listen_address": { 00:17:23.859 "trtype": "TCP", 00:17:23.859 "adrfam": "IPv4", 00:17:23.859 "traddr": "10.0.0.2", 00:17:23.859 "trsvcid": "4420" 00:17:23.859 }, 00:17:23.859 "peer_address": { 00:17:23.859 "trtype": "TCP", 00:17:23.859 "adrfam": "IPv4", 00:17:23.859 "traddr": "10.0.0.1", 00:17:23.859 "trsvcid": "34444" 00:17:23.859 }, 00:17:23.859 "auth": { 00:17:23.859 "state": "completed", 00:17:23.859 "digest": "sha384", 00:17:23.859 "dhgroup": "ffdhe3072" 00:17:23.859 } 00:17:23.859 } 00:17:23.859 ]' 00:17:23.859 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.117 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.117 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.117 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.117 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.117 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.117 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.117 23:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.377 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.945 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.945 23:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.204 00:17:25.204 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.204 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.204 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.462 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.462 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.462 23:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:25.462 23:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.462 23:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:25.462 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:25.462 { 00:17:25.462 "cntlid": 73, 00:17:25.462 "qid": 0, 00:17:25.462 "state": "enabled", 00:17:25.462 "thread": "nvmf_tgt_poll_group_000", 00:17:25.462 "listen_address": { 00:17:25.462 "trtype": "TCP", 00:17:25.462 "adrfam": "IPv4", 00:17:25.462 "traddr": "10.0.0.2", 00:17:25.462 "trsvcid": "4420" 00:17:25.462 }, 00:17:25.462 "peer_address": { 00:17:25.463 "trtype": "TCP", 00:17:25.463 "adrfam": "IPv4", 00:17:25.463 "traddr": "10.0.0.1", 00:17:25.463 "trsvcid": "34476" 00:17:25.463 }, 00:17:25.463 "auth": { 00:17:25.463 "state": "completed", 00:17:25.463 "digest": "sha384", 00:17:25.463 "dhgroup": "ffdhe4096" 00:17:25.463 } 00:17:25.463 } 00:17:25.463 ]' 00:17:25.463 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.463 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.463 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.463 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.722 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.722 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.722 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.722 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.722 23:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret 
DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:17:26.289 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.289 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:26.289 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:26.289 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.289 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:26.289 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.289 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.289 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.550 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.809 00:17:26.809 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.809 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.809 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.069 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.069 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.069 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:27.069 23:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.069 23:43:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:27.069 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.069 { 00:17:27.069 "cntlid": 75, 00:17:27.069 "qid": 0, 00:17:27.069 "state": "enabled", 00:17:27.069 "thread": "nvmf_tgt_poll_group_000", 00:17:27.069 "listen_address": { 00:17:27.069 "trtype": "TCP", 00:17:27.069 "adrfam": "IPv4", 00:17:27.069 "traddr": "10.0.0.2", 00:17:27.069 "trsvcid": "4420" 00:17:27.069 }, 00:17:27.069 "peer_address": { 00:17:27.069 "trtype": "TCP", 00:17:27.069 "adrfam": "IPv4", 00:17:27.069 "traddr": "10.0.0.1", 00:17:27.069 "trsvcid": "34516" 00:17:27.069 }, 00:17:27.069 "auth": { 00:17:27.069 "state": "completed", 00:17:27.069 "digest": "sha384", 00:17:27.069 "dhgroup": "ffdhe4096" 00:17:27.069 } 00:17:27.069 } 00:17:27.069 ]' 00:17:27.069 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.069 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.069 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.069 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.069 23:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.069 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.069 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.069 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.329 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:17:27.899 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.899 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.899 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:27.899 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.899 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:27.899 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.899 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.899 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.158 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:28.158 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.158 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.158 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:28.158 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:28.159 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:28.159 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.159 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:28.159 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.159 23:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:28.159 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.159 23:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.418 00:17:28.418 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.418 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.418 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.418 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.418 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.418 23:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:28.418 23:43:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.678 23:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:28.678 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.678 { 00:17:28.678 "cntlid": 77, 00:17:28.678 "qid": 0, 00:17:28.678 "state": "enabled", 00:17:28.678 "thread": "nvmf_tgt_poll_group_000", 00:17:28.678 "listen_address": { 00:17:28.678 "trtype": "TCP", 00:17:28.678 "adrfam": "IPv4", 00:17:28.678 "traddr": "10.0.0.2", 00:17:28.678 "trsvcid": "4420" 00:17:28.678 }, 00:17:28.678 "peer_address": { 00:17:28.678 "trtype": "TCP", 00:17:28.678 "adrfam": "IPv4", 00:17:28.678 "traddr": "10.0.0.1", 00:17:28.678 "trsvcid": "34820" 00:17:28.678 }, 00:17:28.678 "auth": { 00:17:28.678 "state": "completed", 00:17:28.678 "digest": "sha384", 00:17:28.678 "dhgroup": "ffdhe4096" 00:17:28.678 } 00:17:28.678 } 00:17:28.678 ]' 00:17:28.678 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.678 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.678 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.678 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.678 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.678 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.678 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.678 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.937 23:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.506 23:43:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.506 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.765 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:30.024 23:43:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.024 { 00:17:30.024 "cntlid": 79, 00:17:30.024 "qid": 0, 00:17:30.024 "state": "enabled", 00:17:30.024 "thread": "nvmf_tgt_poll_group_000", 00:17:30.024 "listen_address": { 00:17:30.024 "trtype": "TCP", 00:17:30.024 "adrfam": "IPv4", 00:17:30.024 "traddr": "10.0.0.2", 00:17:30.024 "trsvcid": "4420" 00:17:30.024 }, 00:17:30.024 "peer_address": { 00:17:30.024 "trtype": "TCP", 00:17:30.024 "adrfam": "IPv4", 00:17:30.024 "traddr": "10.0.0.1", 00:17:30.024 "trsvcid": "34844" 00:17:30.024 }, 00:17:30.024 "auth": { 00:17:30.024 "state": "completed", 00:17:30.024 "digest": "sha384", 00:17:30.024 "dhgroup": "ffdhe4096" 00:17:30.024 } 00:17:30.024 } 00:17:30.024 ]' 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.024 23:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.284 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.284 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.284 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.284 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.284 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.284 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:17:30.853 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.853 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.853 23:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:30.853 23:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.853 23:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:30.853 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.853 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.853 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.853 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:31.112 23:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.112 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.373 00:17:31.632 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.632 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.632 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.632 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.632 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:31.632 23:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:31.632 23:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.632 23:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:31.632 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.632 { 00:17:31.632 "cntlid": 81, 00:17:31.632 "qid": 0, 00:17:31.632 "state": "enabled", 00:17:31.632 "thread": "nvmf_tgt_poll_group_000", 00:17:31.632 "listen_address": { 00:17:31.632 "trtype": "TCP", 00:17:31.632 "adrfam": "IPv4", 00:17:31.632 "traddr": "10.0.0.2", 00:17:31.632 "trsvcid": "4420" 00:17:31.632 }, 00:17:31.632 "peer_address": { 00:17:31.632 "trtype": "TCP", 00:17:31.632 "adrfam": "IPv4", 00:17:31.632 "traddr": "10.0.0.1", 00:17:31.632 "trsvcid": "34878" 00:17:31.632 }, 00:17:31.632 "auth": { 00:17:31.632 "state": "completed", 00:17:31.632 "digest": "sha384", 00:17:31.632 "dhgroup": "ffdhe6144" 00:17:31.632 } 00:17:31.632 } 00:17:31.632 ]' 00:17:31.632 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.891 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.891 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.891 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.891 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.891 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.891 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.892 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:32.150 23:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha384 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.718 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.285 00:17:33.285 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.285 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.285 23:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.285 { 00:17:33.285 "cntlid": 83, 00:17:33.285 "qid": 0, 00:17:33.285 "state": "enabled", 00:17:33.285 "thread": "nvmf_tgt_poll_group_000", 00:17:33.285 "listen_address": { 00:17:33.285 "trtype": "TCP", 00:17:33.285 "adrfam": "IPv4", 00:17:33.285 "traddr": "10.0.0.2", 00:17:33.285 "trsvcid": "4420" 00:17:33.285 }, 00:17:33.285 "peer_address": { 00:17:33.285 "trtype": "TCP", 00:17:33.285 "adrfam": "IPv4", 00:17:33.285 "traddr": "10.0.0.1", 00:17:33.285 "trsvcid": "34906" 00:17:33.285 }, 00:17:33.285 "auth": { 00:17:33.285 "state": "completed", 00:17:33.285 "digest": "sha384", 00:17:33.285 "dhgroup": "ffdhe6144" 00:17:33.285 } 00:17:33.285 } 00:17:33.285 ]' 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.285 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.544 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.544 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.544 23:43:22 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.544 23:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:17:34.111 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.111 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.111 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:34.111 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.111 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:34.111 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.111 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.111 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.372 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.633 00:17:34.634 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.634 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.634 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.892 { 00:17:34.892 "cntlid": 85, 00:17:34.892 "qid": 0, 00:17:34.892 "state": "enabled", 00:17:34.892 "thread": "nvmf_tgt_poll_group_000", 00:17:34.892 "listen_address": { 00:17:34.892 "trtype": "TCP", 00:17:34.892 "adrfam": "IPv4", 00:17:34.892 "traddr": "10.0.0.2", 00:17:34.892 "trsvcid": "4420" 00:17:34.892 }, 00:17:34.892 "peer_address": { 00:17:34.892 "trtype": "TCP", 00:17:34.892 "adrfam": "IPv4", 00:17:34.892 "traddr": "10.0.0.1", 00:17:34.892 "trsvcid": "34926" 00:17:34.892 }, 00:17:34.892 "auth": { 00:17:34.892 "state": "completed", 00:17:34.892 "digest": "sha384", 00:17:34.892 "dhgroup": "ffdhe6144" 00:17:34.892 } 00:17:34.892 } 00:17:34.892 ]' 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:34.892 23:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.149 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:17:35.716 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.717 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.717 23:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:35.717 23:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.717 23:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:35.717 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.717 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.717 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:35.976 23:43:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.976 23:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.235 00:17:36.235 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.235 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.235 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.495 { 00:17:36.495 "cntlid": 87, 00:17:36.495 "qid": 0, 00:17:36.495 "state": "enabled", 00:17:36.495 "thread": "nvmf_tgt_poll_group_000", 00:17:36.495 "listen_address": { 00:17:36.495 "trtype": "TCP", 00:17:36.495 "adrfam": "IPv4", 00:17:36.495 "traddr": "10.0.0.2", 00:17:36.495 "trsvcid": "4420" 00:17:36.495 }, 00:17:36.495 "peer_address": { 00:17:36.495 "trtype": "TCP", 00:17:36.495 "adrfam": "IPv4", 00:17:36.495 "traddr": "10.0.0.1", 00:17:36.495 "trsvcid": "34940" 00:17:36.495 }, 00:17:36.495 "auth": { 00:17:36.495 "state": "completed", 00:17:36.495 "digest": "sha384", 00:17:36.495 "dhgroup": "ffdhe6144" 00:17:36.495 } 00:17:36.495 } 00:17:36.495 ]' 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.495 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.754 23:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:17:37.322 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.322 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.322 23:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:37.322 23:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.322 23:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:37.322 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.322 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.322 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.322 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 0 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.581 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.149 00:17:38.149 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.149 23:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.149 23:43:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.149 { 00:17:38.149 "cntlid": 89, 00:17:38.149 "qid": 0, 00:17:38.149 "state": "enabled", 00:17:38.149 "thread": "nvmf_tgt_poll_group_000", 00:17:38.149 "listen_address": { 00:17:38.149 "trtype": "TCP", 00:17:38.149 "adrfam": "IPv4", 00:17:38.149 "traddr": "10.0.0.2", 00:17:38.149 "trsvcid": "4420" 00:17:38.149 }, 00:17:38.149 "peer_address": { 00:17:38.149 "trtype": "TCP", 00:17:38.149 "adrfam": "IPv4", 00:17:38.149 "traddr": "10.0.0.1", 00:17:38.149 "trsvcid": "34964" 00:17:38.149 }, 00:17:38.149 "auth": { 00:17:38.149 "state": "completed", 00:17:38.149 "digest": "sha384", 00:17:38.149 "dhgroup": "ffdhe8192" 00:17:38.149 } 00:17:38.149 } 00:17:38.149 ]' 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.149 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.408 23:43:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.408 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.408 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.408 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:17:38.975 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.975 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.975 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:38.975 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.975 23:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:38.975 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.975 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.975 23:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.233 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.801 00:17:39.801 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:17:39.801 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.801 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.801 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.801 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.801 23:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:39.801 23:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.801 23:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:39.801 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.801 { 00:17:39.801 "cntlid": 91, 00:17:39.801 "qid": 0, 00:17:39.801 "state": "enabled", 00:17:39.801 "thread": "nvmf_tgt_poll_group_000", 00:17:39.801 "listen_address": { 00:17:39.801 "trtype": "TCP", 00:17:39.801 "adrfam": "IPv4", 00:17:39.801 "traddr": "10.0.0.2", 00:17:39.801 "trsvcid": "4420" 00:17:39.801 }, 00:17:39.801 "peer_address": { 00:17:39.801 "trtype": "TCP", 00:17:39.801 "adrfam": "IPv4", 00:17:39.801 "traddr": "10.0.0.1", 00:17:39.801 "trsvcid": "42164" 00:17:39.801 }, 00:17:39.801 "auth": { 00:17:39.801 "state": "completed", 00:17:39.801 "digest": "sha384", 00:17:39.801 "dhgroup": "ffdhe8192" 00:17:39.801 } 00:17:39.801 } 00:17:39.801 ]' 00:17:39.801 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.061 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.061 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.061 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:17:40.061 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.061 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.061 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.061 23:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.349 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.919 23:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.487 
00:17:41.487 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.487 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.487 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.747 { 00:17:41.747 "cntlid": 93, 00:17:41.747 "qid": 0, 00:17:41.747 "state": "enabled", 00:17:41.747 "thread": "nvmf_tgt_poll_group_000", 00:17:41.747 "listen_address": { 00:17:41.747 "trtype": "TCP", 00:17:41.747 "adrfam": "IPv4", 00:17:41.747 "traddr": "10.0.0.2", 00:17:41.747 "trsvcid": "4420" 00:17:41.747 }, 00:17:41.747 "peer_address": { 00:17:41.747 "trtype": "TCP", 00:17:41.747 "adrfam": "IPv4", 00:17:41.747 "traddr": "10.0.0.1", 00:17:41.747 "trsvcid": "42192" 00:17:41.747 }, 00:17:41.747 "auth": { 00:17:41.747 "state": "completed", 00:17:41.747 "digest": "sha384", 00:17:41.747 "dhgroup": "ffdhe8192" 00:17:41.747 } 00:17:41.747 } 00:17:41.747 ]' 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.747 23:43:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.747 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.007 23:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.575 23:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.143 
00:17:43.143 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.143 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.143 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.402 { 00:17:43.402 "cntlid": 95, 00:17:43.402 "qid": 0, 00:17:43.402 "state": "enabled", 00:17:43.402 "thread": "nvmf_tgt_poll_group_000", 00:17:43.402 "listen_address": { 00:17:43.402 "trtype": "TCP", 00:17:43.402 "adrfam": "IPv4", 00:17:43.402 "traddr": "10.0.0.2", 00:17:43.402 "trsvcid": "4420" 00:17:43.402 }, 00:17:43.402 "peer_address": { 00:17:43.402 "trtype": "TCP", 00:17:43.402 "adrfam": "IPv4", 00:17:43.402 "traddr": "10.0.0.1", 00:17:43.402 "trsvcid": "42238" 00:17:43.402 }, 00:17:43.402 "auth": { 00:17:43.402 "state": "completed", 00:17:43.402 "digest": "sha384", 00:17:43.402 "dhgroup": "ffdhe8192" 00:17:43.402 } 00:17:43.402 } 00:17:43.402 ]' 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.402 23:43:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.402 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.661 23:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:17:44.229 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.229 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.229 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:44.229 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.229 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:44.229 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:44.229 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:44.229 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
00:17:44.229 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.229 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.486 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:44.745 00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.745 { 00:17:44.745 "cntlid": 97, 00:17:44.745 "qid": 0, 00:17:44.745 "state": "enabled", 00:17:44.745 "thread": "nvmf_tgt_poll_group_000", 00:17:44.745 "listen_address": { 00:17:44.745 "trtype": "TCP", 00:17:44.745 "adrfam": "IPv4", 00:17:44.745 "traddr": "10.0.0.2", 00:17:44.745 "trsvcid": "4420" 00:17:44.745 }, 00:17:44.745 "peer_address": { 00:17:44.745 "trtype": "TCP", 00:17:44.745 "adrfam": "IPv4", 00:17:44.745 "traddr": "10.0.0.1", 00:17:44.745 "trsvcid": "42276" 00:17:44.745 }, 00:17:44.745 "auth": { 00:17:44.745 "state": "completed", 00:17:44.745 "digest": "sha512", 00:17:44.745 "dhgroup": "null" 00:17:44.745 } 00:17:44.745 } 00:17:44.745 ]' 00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:17:44.745 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.004 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.004 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:45.004 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.004 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.004 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.004 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.004 23:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:17:45.570 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.570 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.570 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:45.570 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.570 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:17:45.570 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.570 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.570 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.826 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:46.084 00:17:46.084 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.084 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.084 23:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.342 { 00:17:46.342 "cntlid": 99, 00:17:46.342 "qid": 0, 00:17:46.342 "state": "enabled", 00:17:46.342 "thread": "nvmf_tgt_poll_group_000", 00:17:46.342 "listen_address": { 00:17:46.342 "trtype": "TCP", 00:17:46.342 "adrfam": "IPv4", 00:17:46.342 "traddr": "10.0.0.2", 00:17:46.342 "trsvcid": "4420" 00:17:46.342 }, 00:17:46.342 "peer_address": { 00:17:46.342 "trtype": "TCP", 00:17:46.342 "adrfam": "IPv4", 00:17:46.342 "traddr": "10.0.0.1", 00:17:46.342 "trsvcid": "42300" 00:17:46.342 }, 00:17:46.342 "auth": { 00:17:46.342 "state": "completed", 00:17:46.342 "digest": "sha512", 00:17:46.342 "dhgroup": "null" 00:17:46.342 } 00:17:46.342 } 00:17:46.342 ]' 00:17:46.342 
23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.342 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.599 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:17:47.164 23:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.164 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.164 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:47.164 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.164 23:43:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:47.164 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.164 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.164 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:47.422 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.422 23:43:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:47.681 00:17:47.681 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.681 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.681 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.681 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.681 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.681 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:47.681 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.681 23:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:47.681 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.681 { 00:17:47.681 "cntlid": 101, 00:17:47.681 "qid": 0, 00:17:47.681 "state": "enabled", 00:17:47.681 "thread": "nvmf_tgt_poll_group_000", 00:17:47.681 "listen_address": { 00:17:47.681 "trtype": "TCP", 00:17:47.681 "adrfam": "IPv4", 00:17:47.681 "traddr": "10.0.0.2", 00:17:47.681 "trsvcid": "4420" 00:17:47.681 }, 00:17:47.681 "peer_address": { 00:17:47.681 "trtype": "TCP", 00:17:47.681 "adrfam": "IPv4", 00:17:47.681 "traddr": "10.0.0.1", 00:17:47.681 "trsvcid": "42312" 00:17:47.681 }, 00:17:47.681 "auth": { 00:17:47.681 "state": "completed", 00:17:47.681 "digest": "sha512", 00:17:47.681 "dhgroup": "null" 
00:17:47.681 } 00:17:47.681 } 00:17:47.681 ]' 00:17:47.681 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.939 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.939 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.939 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:47.939 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.939 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.939 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.940 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.197 23:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:17:48.761 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.761 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.761 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:48.761 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:48.761 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:48.761 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.761 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:48.762 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.762 23:43:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:49.020 00:17:49.020 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.020 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.020 23:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.277 { 00:17:49.277 "cntlid": 103, 00:17:49.277 "qid": 0, 00:17:49.277 "state": "enabled", 00:17:49.277 "thread": "nvmf_tgt_poll_group_000", 00:17:49.277 "listen_address": { 00:17:49.277 "trtype": "TCP", 00:17:49.277 "adrfam": "IPv4", 00:17:49.277 "traddr": "10.0.0.2", 00:17:49.277 "trsvcid": "4420" 00:17:49.277 }, 00:17:49.277 "peer_address": { 00:17:49.277 "trtype": "TCP", 00:17:49.277 "adrfam": "IPv4", 00:17:49.277 "traddr": "10.0.0.1", 00:17:49.277 "trsvcid": "56344" 00:17:49.277 }, 00:17:49.277 "auth": { 00:17:49.277 "state": "completed", 00:17:49.277 "digest": "sha512", 00:17:49.277 "dhgroup": "null" 00:17:49.277 } 00:17:49.277 } 
00:17:49.277 ]' 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.277 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.536 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:17:50.100 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.100 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.100 23:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.100 23:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.100 23:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 
0 ]] 00:17:50.100 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.100 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.100 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.100 23:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.358 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.617 00:17:50.617 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.617 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.617 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.617 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.617 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.617 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.617 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.617 23:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.617 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.617 { 00:17:50.617 "cntlid": 105, 00:17:50.617 "qid": 0, 00:17:50.617 "state": "enabled", 00:17:50.617 "thread": "nvmf_tgt_poll_group_000", 00:17:50.617 "listen_address": { 00:17:50.617 "trtype": "TCP", 00:17:50.617 "adrfam": "IPv4", 00:17:50.617 "traddr": "10.0.0.2", 00:17:50.617 "trsvcid": "4420" 00:17:50.617 }, 00:17:50.617 "peer_address": { 00:17:50.617 "trtype": "TCP", 00:17:50.617 "adrfam": "IPv4", 00:17:50.617 "traddr": "10.0.0.1", 00:17:50.617 "trsvcid": "56378" 00:17:50.617 }, 00:17:50.617 "auth": { 00:17:50.617 
"state": "completed", 00:17:50.617 "digest": "sha512", 00:17:50.617 "dhgroup": "ffdhe2048" 00:17:50.617 } 00:17:50.617 } 00:17:50.617 ]' 00:17:50.617 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.875 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.875 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.875 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.875 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.875 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.875 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.875 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.134 23:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.699 23:43:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.699 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.957 00:17:51.957 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.957 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.957 23:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.215 { 00:17:52.215 "cntlid": 107, 00:17:52.215 "qid": 0, 00:17:52.215 "state": "enabled", 00:17:52.215 "thread": "nvmf_tgt_poll_group_000", 00:17:52.215 "listen_address": { 00:17:52.215 "trtype": "TCP", 00:17:52.215 "adrfam": "IPv4", 00:17:52.215 "traddr": "10.0.0.2", 00:17:52.215 "trsvcid": "4420" 00:17:52.215 }, 00:17:52.215 "peer_address": { 00:17:52.215 "trtype": "TCP", 
00:17:52.215 "adrfam": "IPv4", 00:17:52.215 "traddr": "10.0.0.1", 00:17:52.215 "trsvcid": "56394" 00:17:52.215 }, 00:17:52.215 "auth": { 00:17:52.215 "state": "completed", 00:17:52.215 "digest": "sha512", 00:17:52.215 "dhgroup": "ffdhe2048" 00:17:52.215 } 00:17:52.215 } 00:17:52.215 ]' 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:52.215 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.473 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.473 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.473 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.473 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:17:53.041 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.041 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.041 23:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:53.041 23:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.041 23:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:53.041 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.041 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.041 23:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 
]] 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.300 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.559 00:17:53.559 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.559 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.559 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.819 { 00:17:53.819 "cntlid": 109, 00:17:53.819 "qid": 0, 00:17:53.819 "state": "enabled", 00:17:53.819 "thread": "nvmf_tgt_poll_group_000", 00:17:53.819 "listen_address": { 00:17:53.819 "trtype": "TCP", 00:17:53.819 "adrfam": "IPv4", 00:17:53.819 "traddr": "10.0.0.2", 00:17:53.819 "trsvcid": "4420" 
00:17:53.819 }, 00:17:53.819 "peer_address": { 00:17:53.819 "trtype": "TCP", 00:17:53.819 "adrfam": "IPv4", 00:17:53.819 "traddr": "10.0.0.1", 00:17:53.819 "trsvcid": "56430" 00:17:53.819 }, 00:17:53.819 "auth": { 00:17:53.819 "state": "completed", 00:17:53.819 "digest": "sha512", 00:17:53.819 "dhgroup": "ffdhe2048" 00:17:53.819 } 00:17:53.819 } 00:17:53.819 ]' 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.819 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.078 23:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.647 23:43:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.647 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:54.906 00:17:54.906 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.906 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.906 23:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.164 { 00:17:55.164 "cntlid": 111, 00:17:55.164 "qid": 0, 00:17:55.164 "state": "enabled", 00:17:55.164 "thread": "nvmf_tgt_poll_group_000", 00:17:55.164 "listen_address": { 00:17:55.164 "trtype": "TCP", 00:17:55.164 "adrfam": "IPv4", 00:17:55.164 "traddr": "10.0.0.2", 
00:17:55.164 "trsvcid": "4420" 00:17:55.164 }, 00:17:55.164 "peer_address": { 00:17:55.164 "trtype": "TCP", 00:17:55.164 "adrfam": "IPv4", 00:17:55.164 "traddr": "10.0.0.1", 00:17:55.164 "trsvcid": "56464" 00:17:55.164 }, 00:17:55.164 "auth": { 00:17:55.164 "state": "completed", 00:17:55.164 "digest": "sha512", 00:17:55.164 "dhgroup": "ffdhe2048" 00:17:55.164 } 00:17:55.164 } 00:17:55.164 ]' 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.164 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.423 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.423 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.423 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.423 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:17:55.990 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.990 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.990 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:55.990 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.990 23:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:55.990 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.990 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.990 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.990 23:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:56.249 23:43:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.249 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.509 00:17:56.509 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.509 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.509 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.768 { 00:17:56.768 "cntlid": 113, 00:17:56.768 "qid": 0, 00:17:56.768 "state": "enabled", 00:17:56.768 "thread": 
"nvmf_tgt_poll_group_000", 00:17:56.768 "listen_address": { 00:17:56.768 "trtype": "TCP", 00:17:56.768 "adrfam": "IPv4", 00:17:56.768 "traddr": "10.0.0.2", 00:17:56.768 "trsvcid": "4420" 00:17:56.768 }, 00:17:56.768 "peer_address": { 00:17:56.768 "trtype": "TCP", 00:17:56.768 "adrfam": "IPv4", 00:17:56.768 "traddr": "10.0.0.1", 00:17:56.768 "trsvcid": "56486" 00:17:56.768 }, 00:17:56.768 "auth": { 00:17:56.768 "state": "completed", 00:17:56.768 "digest": "sha512", 00:17:56.768 "dhgroup": "ffdhe3072" 00:17:56.768 } 00:17:56.768 } 00:17:56.768 ]' 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.768 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.086 23:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:17:57.673 23:43:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.673 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.932 00:17:57.932 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.932 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.932 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.192 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.192 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.192 23:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:58.192 23:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.192 23:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:58.192 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:17:58.192 { 00:17:58.192 "cntlid": 115, 00:17:58.192 "qid": 0, 00:17:58.192 "state": "enabled", 00:17:58.192 "thread": "nvmf_tgt_poll_group_000", 00:17:58.192 "listen_address": { 00:17:58.192 "trtype": "TCP", 00:17:58.192 "adrfam": "IPv4", 00:17:58.192 "traddr": "10.0.0.2", 00:17:58.192 "trsvcid": "4420" 00:17:58.192 }, 00:17:58.192 "peer_address": { 00:17:58.192 "trtype": "TCP", 00:17:58.192 "adrfam": "IPv4", 00:17:58.192 "traddr": "10.0.0.1", 00:17:58.192 "trsvcid": "56532" 00:17:58.192 }, 00:17:58.192 "auth": { 00:17:58.192 "state": "completed", 00:17:58.192 "digest": "sha512", 00:17:58.192 "dhgroup": "ffdhe3072" 00:17:58.192 } 00:17:58.192 } 00:17:58.192 ]' 00:17:58.192 23:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.192 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.192 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.192 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:58.192 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.192 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.192 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.192 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.451 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret 
DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:17:59.019 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.019 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.019 23:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:59.019 23:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.019 23:43:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:59.019 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.019 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.019 23:43:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.278 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.537 00:17:59.537 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.537 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.537 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.537 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.537 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.537 23:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:59.537 23:43:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.537 23:43:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:59.537 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.537 { 00:17:59.537 "cntlid": 117, 00:17:59.537 "qid": 0, 00:17:59.537 "state": "enabled", 00:17:59.537 "thread": "nvmf_tgt_poll_group_000", 00:17:59.537 "listen_address": { 00:17:59.537 "trtype": "TCP", 00:17:59.537 "adrfam": "IPv4", 00:17:59.537 "traddr": "10.0.0.2", 00:17:59.537 "trsvcid": "4420" 00:17:59.537 }, 00:17:59.537 "peer_address": { 00:17:59.537 "trtype": "TCP", 00:17:59.537 "adrfam": "IPv4", 00:17:59.537 "traddr": "10.0.0.1", 00:17:59.537 "trsvcid": "58266" 00:17:59.537 }, 00:17:59.537 "auth": { 00:17:59.537 "state": "completed", 00:17:59.537 "digest": "sha512", 00:17:59.537 "dhgroup": "ffdhe3072" 00:17:59.537 } 00:17:59.537 } 00:17:59.537 ]' 00:17:59.537 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.796 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.796 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.796 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.796 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.796 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.796 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.796 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.054 23:43:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.622 23:43:49 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.622 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.880 00:18:00.880 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.880 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.880 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.139 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.139 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.139 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:01.139 23:43:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.139 23:43:49 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:01.139 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.139 { 00:18:01.139 "cntlid": 119, 00:18:01.139 "qid": 0, 00:18:01.139 "state": "enabled", 00:18:01.139 "thread": "nvmf_tgt_poll_group_000", 00:18:01.139 "listen_address": { 00:18:01.139 "trtype": "TCP", 00:18:01.139 "adrfam": "IPv4", 00:18:01.139 "traddr": "10.0.0.2", 00:18:01.139 "trsvcid": "4420" 00:18:01.139 }, 00:18:01.139 "peer_address": { 00:18:01.139 "trtype": "TCP", 00:18:01.139 "adrfam": "IPv4", 00:18:01.139 "traddr": "10.0.0.1", 00:18:01.139 "trsvcid": "58290" 00:18:01.139 }, 00:18:01.139 "auth": { 00:18:01.139 "state": "completed", 00:18:01.139 "digest": "sha512", 00:18:01.139 "dhgroup": "ffdhe3072" 00:18:01.139 } 00:18:01.139 } 00:18:01.139 ]' 00:18:01.139 23:43:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.139 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.139 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.139 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.139 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.397 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.397 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.397 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.397 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:18:01.965 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.965 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.965 23:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:01.965 23:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.965 23:43:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:01.965 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.965 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.965 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.965 23:43:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.224 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.482 00:18:02.483 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.483 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.483 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.741 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.741 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.741 23:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 
00:18:02.741 23:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.741 23:43:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:02.741 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.741 { 00:18:02.741 "cntlid": 121, 00:18:02.741 "qid": 0, 00:18:02.741 "state": "enabled", 00:18:02.741 "thread": "nvmf_tgt_poll_group_000", 00:18:02.741 "listen_address": { 00:18:02.741 "trtype": "TCP", 00:18:02.742 "adrfam": "IPv4", 00:18:02.742 "traddr": "10.0.0.2", 00:18:02.742 "trsvcid": "4420" 00:18:02.742 }, 00:18:02.742 "peer_address": { 00:18:02.742 "trtype": "TCP", 00:18:02.742 "adrfam": "IPv4", 00:18:02.742 "traddr": "10.0.0.1", 00:18:02.742 "trsvcid": "58308" 00:18:02.742 }, 00:18:02.742 "auth": { 00:18:02.742 "state": "completed", 00:18:02.742 "digest": "sha512", 00:18:02.742 "dhgroup": "ffdhe4096" 00:18:02.742 } 00:18:02.742 } 00:18:02.742 ]' 00:18:02.742 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.742 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.742 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.742 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.742 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.742 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.742 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.742 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.000 23:43:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:18:03.565 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.565 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.566 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:03.566 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.566 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:03.566 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.566 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.566 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:03.824 23:43:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.824 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.083 00:18:04.083 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.083 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.083 23:43:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.083 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.083 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.083 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:04.084 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.084 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:04.084 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.084 { 00:18:04.084 "cntlid": 123, 00:18:04.084 "qid": 0, 00:18:04.084 "state": "enabled", 00:18:04.084 "thread": "nvmf_tgt_poll_group_000", 00:18:04.084 "listen_address": { 00:18:04.084 "trtype": "TCP", 00:18:04.084 "adrfam": "IPv4", 00:18:04.084 "traddr": "10.0.0.2", 00:18:04.084 "trsvcid": "4420" 00:18:04.084 }, 00:18:04.084 "peer_address": { 00:18:04.084 "trtype": "TCP", 00:18:04.084 "adrfam": "IPv4", 00:18:04.084 "traddr": "10.0.0.1", 00:18:04.084 "trsvcid": "58340" 00:18:04.084 }, 00:18:04.084 "auth": { 00:18:04.084 "state": "completed", 00:18:04.084 "digest": "sha512", 00:18:04.084 "dhgroup": "ffdhe4096" 00:18:04.084 } 00:18:04.084 } 00:18:04.084 ]' 00:18:04.084 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.343 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.343 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.343 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:04.343 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.343 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.343 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.343 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.603 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:18:05.171 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.171 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.171 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:05.171 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.171 23:43:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:05.171 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.171 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.171 23:43:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.171 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:05.430 00:18:05.430 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.430 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.430 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.689 { 00:18:05.689 "cntlid": 125, 00:18:05.689 "qid": 0, 00:18:05.689 "state": "enabled", 00:18:05.689 "thread": "nvmf_tgt_poll_group_000", 00:18:05.689 "listen_address": { 00:18:05.689 "trtype": "TCP", 00:18:05.689 "adrfam": "IPv4", 00:18:05.689 "traddr": "10.0.0.2", 00:18:05.689 "trsvcid": "4420" 00:18:05.689 }, 00:18:05.689 "peer_address": { 00:18:05.689 "trtype": "TCP", 00:18:05.689 "adrfam": "IPv4", 00:18:05.689 "traddr": "10.0.0.1", 00:18:05.689 "trsvcid": "58362" 00:18:05.689 }, 00:18:05.689 "auth": { 00:18:05.689 "state": "completed", 00:18:05.689 "digest": "sha512", 00:18:05.689 "dhgroup": "ffdhe4096" 00:18:05.689 } 00:18:05.689 } 00:18:05.689 ]' 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.689 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.948 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.948 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.948 23:43:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.949 23:43:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:18:06.517 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.517 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:06.517 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:06.517 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.517 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:06.517 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.517 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.517 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
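After each attach, the script dumps the qpair list from `nvmf_subsystem_get_qpairs` and checks the negotiated auth fields with `jq`, as seen at `target/auth.sh@46`–`@48` above. A self-contained sketch of that verification, using an abbreviated copy of the JSON from the dump above as sample input:

```shell
# Abbreviated qpair JSON as returned by nvmf_subsystem_get_qpairs in the trace.
qpairs='[{"cntlid":125,"qid":0,"state":"enabled",
          "auth":{"state":"completed","digest":"sha512","dhgroup":"ffdhe4096"}}]'

# Extract the negotiated auth parameters the same way target/auth.sh does.
digest=$(echo "$qpairs"  | jq -r '.[0].auth.digest')
dhgroup=$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')
astate=$(echo "$qpairs"  | jq -r '.[0].auth.state')

# The test passes only when all three match the configured values.
[[ $digest == sha512 && $dhgroup == ffdhe4096 && $astate == completed ]] && echo OK
```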
00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.777 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.036 00:18:07.036 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.036 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.036 23:43:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.295 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:18:07.295 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.295 23:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:07.295 23:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.296 23:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:07.296 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.296 { 00:18:07.296 "cntlid": 127, 00:18:07.296 "qid": 0, 00:18:07.296 "state": "enabled", 00:18:07.296 "thread": "nvmf_tgt_poll_group_000", 00:18:07.296 "listen_address": { 00:18:07.296 "trtype": "TCP", 00:18:07.296 "adrfam": "IPv4", 00:18:07.296 "traddr": "10.0.0.2", 00:18:07.296 "trsvcid": "4420" 00:18:07.296 }, 00:18:07.296 "peer_address": { 00:18:07.296 "trtype": "TCP", 00:18:07.296 "adrfam": "IPv4", 00:18:07.296 "traddr": "10.0.0.1", 00:18:07.296 "trsvcid": "58380" 00:18:07.296 }, 00:18:07.296 "auth": { 00:18:07.296 "state": "completed", 00:18:07.296 "digest": "sha512", 00:18:07.296 "dhgroup": "ffdhe4096" 00:18:07.296 } 00:18:07.296 } 00:18:07.296 ]' 00:18:07.296 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.296 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.296 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.296 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:07.296 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.296 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.296 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.296 23:43:56 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.555 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:18:08.122 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.122 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.122 23:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:08.122 23:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.122 23:43:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:08.122 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.122 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.122 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.122 23:43:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- 
# local digest dhgroup key ckey qpairs 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.122 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.690 00:18:08.690 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.690 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.690 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.690 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.690 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.690 23:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:08.690 23:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.690 23:43:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:08.690 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.690 { 00:18:08.690 "cntlid": 129, 00:18:08.690 "qid": 0, 00:18:08.690 "state": "enabled", 00:18:08.690 "thread": "nvmf_tgt_poll_group_000", 00:18:08.690 "listen_address": { 00:18:08.690 "trtype": "TCP", 00:18:08.690 "adrfam": "IPv4", 00:18:08.690 "traddr": "10.0.0.2", 00:18:08.690 "trsvcid": "4420" 00:18:08.690 }, 00:18:08.690 "peer_address": { 00:18:08.690 "trtype": "TCP", 00:18:08.690 "adrfam": "IPv4", 00:18:08.690 "traddr": "10.0.0.1", 00:18:08.690 "trsvcid": "44306" 00:18:08.691 }, 00:18:08.691 "auth": { 00:18:08.691 "state": "completed", 00:18:08.691 "digest": "sha512", 00:18:08.691 "dhgroup": "ffdhe6144" 00:18:08.691 } 00:18:08.691 } 00:18:08.691 ]' 00:18:08.691 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.691 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.691 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.950 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:08.950 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.950 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.950 23:43:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.950 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.950 23:43:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:18:09.518 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.518 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.518 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:09.518 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.518 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:09.518 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.518 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.518 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:09.777 23:43:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.777 23:43:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.036 00:18:10.295 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.295 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:18:10.295 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.296 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.296 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.296 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:10.296 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.296 23:43:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:10.296 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.296 { 00:18:10.296 "cntlid": 131, 00:18:10.296 "qid": 0, 00:18:10.296 "state": "enabled", 00:18:10.296 "thread": "nvmf_tgt_poll_group_000", 00:18:10.296 "listen_address": { 00:18:10.296 "trtype": "TCP", 00:18:10.296 "adrfam": "IPv4", 00:18:10.296 "traddr": "10.0.0.2", 00:18:10.296 "trsvcid": "4420" 00:18:10.296 }, 00:18:10.296 "peer_address": { 00:18:10.296 "trtype": "TCP", 00:18:10.296 "adrfam": "IPv4", 00:18:10.296 "traddr": "10.0.0.1", 00:18:10.296 "trsvcid": "44334" 00:18:10.296 }, 00:18:10.296 "auth": { 00:18:10.296 "state": "completed", 00:18:10.296 "digest": "sha512", 00:18:10.296 "dhgroup": "ffdhe6144" 00:18:10.296 } 00:18:10.296 } 00:18:10.296 ]' 00:18:10.296 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.296 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.296 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.555 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:10.555 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:18:10.555 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.555 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.555 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.555 23:43:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:18:11.125 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.125 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.125 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:11.125 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.125 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:11.125 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.125 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.125 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.384 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.643 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.902 
23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.902 { 00:18:11.902 "cntlid": 133, 00:18:11.902 "qid": 0, 00:18:11.902 "state": "enabled", 00:18:11.902 "thread": "nvmf_tgt_poll_group_000", 00:18:11.902 "listen_address": { 00:18:11.902 "trtype": "TCP", 00:18:11.902 "adrfam": "IPv4", 00:18:11.902 "traddr": "10.0.0.2", 00:18:11.902 "trsvcid": "4420" 00:18:11.902 }, 00:18:11.902 "peer_address": { 00:18:11.902 "trtype": "TCP", 00:18:11.902 "adrfam": "IPv4", 00:18:11.902 "traddr": "10.0.0.1", 00:18:11.902 "trsvcid": "44352" 00:18:11.902 }, 00:18:11.902 "auth": { 00:18:11.902 "state": "completed", 00:18:11.902 "digest": "sha512", 00:18:11.902 "dhgroup": "ffdhe6144" 00:18:11.902 } 00:18:11.902 } 00:18:11.902 ]' 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.902 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.162 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.162 
23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.162 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.162 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.162 23:44:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.162 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:18:12.729 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.729 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:12.729 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:12.729 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.729 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:12.729 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.729 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.729 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.988 23:44:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.247 00:18:13.247 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # 
hostrpc bdev_nvme_get_controllers 00:18:13.247 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.247 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.506 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.506 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.506 23:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:13.506 23:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.506 23:44:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:13.506 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.506 { 00:18:13.506 "cntlid": 135, 00:18:13.506 "qid": 0, 00:18:13.506 "state": "enabled", 00:18:13.506 "thread": "nvmf_tgt_poll_group_000", 00:18:13.506 "listen_address": { 00:18:13.506 "trtype": "TCP", 00:18:13.506 "adrfam": "IPv4", 00:18:13.506 "traddr": "10.0.0.2", 00:18:13.506 "trsvcid": "4420" 00:18:13.506 }, 00:18:13.506 "peer_address": { 00:18:13.506 "trtype": "TCP", 00:18:13.506 "adrfam": "IPv4", 00:18:13.506 "traddr": "10.0.0.1", 00:18:13.506 "trsvcid": "44378" 00:18:13.506 }, 00:18:13.506 "auth": { 00:18:13.506 "state": "completed", 00:18:13.506 "digest": "sha512", 00:18:13.506 "dhgroup": "ffdhe6144" 00:18:13.506 } 00:18:13.506 } 00:18:13.506 ]' 00:18:13.506 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.506 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.506 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.802 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:18:13.802 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.802 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.802 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.802 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.802 23:44:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:18:14.401 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.401 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.401 23:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:14.401 23:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.401 23:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:14.401 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.401 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.401 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.401 23:44:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.660 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.228 00:18:15.228 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.228 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.228 23:44:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.228 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.228 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.228 23:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:15.228 23:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.228 23:44:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:15.228 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.228 { 00:18:15.228 "cntlid": 137, 00:18:15.228 "qid": 0, 00:18:15.228 "state": "enabled", 00:18:15.228 "thread": "nvmf_tgt_poll_group_000", 00:18:15.228 "listen_address": { 00:18:15.228 "trtype": "TCP", 00:18:15.228 "adrfam": "IPv4", 00:18:15.228 "traddr": "10.0.0.2", 00:18:15.228 "trsvcid": "4420" 00:18:15.228 }, 00:18:15.228 "peer_address": { 00:18:15.228 "trtype": "TCP", 00:18:15.228 "adrfam": "IPv4", 00:18:15.228 "traddr": "10.0.0.1", 00:18:15.228 "trsvcid": "44388" 00:18:15.228 }, 00:18:15.228 "auth": { 00:18:15.228 "state": "completed", 00:18:15.228 "digest": "sha512", 00:18:15.228 "dhgroup": "ffdhe8192" 00:18:15.228 } 00:18:15.228 } 00:18:15.228 ]' 00:18:15.228 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.228 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.228 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- 
# jq -r '.[0].auth.dhgroup' 00:18:15.487 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.487 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.487 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.487 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.487 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.487 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:18:16.055 23:44:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.055 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:16.055 23:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:16.055 23:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.055 23:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.314 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.881 00:18:16.881 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.881 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.881 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.140 { 00:18:17.140 "cntlid": 139, 00:18:17.140 "qid": 0, 00:18:17.140 "state": "enabled", 00:18:17.140 "thread": "nvmf_tgt_poll_group_000", 00:18:17.140 "listen_address": { 00:18:17.140 "trtype": "TCP", 00:18:17.140 "adrfam": "IPv4", 00:18:17.140 "traddr": "10.0.0.2", 00:18:17.140 "trsvcid": "4420" 00:18:17.140 }, 00:18:17.140 "peer_address": { 00:18:17.140 "trtype": "TCP", 00:18:17.140 "adrfam": "IPv4", 00:18:17.140 "traddr": "10.0.0.1", 00:18:17.140 "trsvcid": "44408" 00:18:17.140 }, 00:18:17.140 "auth": { 00:18:17.140 "state": "completed", 00:18:17.140 "digest": "sha512", 00:18:17.140 "dhgroup": "ffdhe8192" 00:18:17.140 } 00:18:17.140 } 00:18:17.140 ]' 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:17.140 23:44:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.140 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.140 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.140 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.399 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:NGYyYTZiMjNhOTE5MjU0NTgzY2Y4YmZhNDhmNTYwYzlEzhoT: --dhchap-ctrl-secret DHHC-1:02:MTE1NjhlNjkzM2UxN2Q5N2EwNDJiMmVmMjczNzdiNjYyZjgzYWE2MTJjZjMzMTlmLfDFtg==: 00:18:17.966 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.966 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:17.966 23:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:17.966 23:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.966 23:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:17.966 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid 
in "${!keys[@]}" 00:18:17.966 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.967 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:17.967 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:17.967 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.967 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.967 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.967 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.967 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.967 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.225 23:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:18.225 23:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.225 23:44:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:18.225 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.225 23:44:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.484 00:18:18.484 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.484 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.484 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.744 { 00:18:18.744 "cntlid": 141, 00:18:18.744 "qid": 0, 00:18:18.744 "state": "enabled", 00:18:18.744 "thread": "nvmf_tgt_poll_group_000", 00:18:18.744 "listen_address": { 00:18:18.744 "trtype": "TCP", 00:18:18.744 "adrfam": "IPv4", 00:18:18.744 "traddr": "10.0.0.2", 00:18:18.744 "trsvcid": "4420" 00:18:18.744 }, 00:18:18.744 "peer_address": { 00:18:18.744 "trtype": "TCP", 00:18:18.744 "adrfam": "IPv4", 00:18:18.744 "traddr": "10.0.0.1", 00:18:18.744 "trsvcid": "55230" 00:18:18.744 }, 00:18:18.744 "auth": { 00:18:18.744 "state": "completed", 00:18:18.744 "digest": "sha512", 00:18:18.744 "dhgroup": "ffdhe8192" 00:18:18.744 } 00:18:18.744 } 00:18:18.744 ]' 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.744 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.004 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.004 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.004 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.004 23:44:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MWY1MDdjYjg2NDI1OTBkYzhhMTRkM2Q0MzNlZmU5MzA2NDgzZjEzZjZiNWViYjgw0+Vq0w==: --dhchap-ctrl-secret DHHC-1:01:MDk4MGJlNzEyODgyNDZjZmM3MDUzZWJkZDQ0MzAwMGblF3cu: 00:18:19.572 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.572 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.572 23:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:19.572 23:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.572 23:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:19.572 
23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.572 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.572 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:19.830 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:19.830 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.830 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:19.830 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:19.831 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:19.831 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.831 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:19.831 23:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:19.831 23:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.831 23:44:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:19.831 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:19.831 23:44:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.399 00:18:20.399 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.399 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.399 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.399 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.399 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.399 23:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:20.399 23:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.659 23:44:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:20.659 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.659 { 00:18:20.659 "cntlid": 143, 00:18:20.659 "qid": 0, 00:18:20.659 "state": "enabled", 00:18:20.659 "thread": "nvmf_tgt_poll_group_000", 00:18:20.659 "listen_address": { 00:18:20.659 "trtype": "TCP", 00:18:20.659 "adrfam": "IPv4", 00:18:20.659 "traddr": "10.0.0.2", 00:18:20.659 "trsvcid": "4420" 00:18:20.659 }, 00:18:20.659 "peer_address": { 00:18:20.659 "trtype": "TCP", 00:18:20.659 "adrfam": "IPv4", 00:18:20.659 "traddr": "10.0.0.1", 00:18:20.659 "trsvcid": "55250" 00:18:20.659 }, 00:18:20.659 "auth": { 00:18:20.659 "state": "completed", 00:18:20.659 "digest": "sha512", 00:18:20.659 "dhgroup": "ffdhe8192" 00:18:20.659 } 00:18:20.659 } 00:18:20.659 ]' 00:18:20.659 23:44:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.659 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.659 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.659 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:20.659 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.659 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.659 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.659 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.918 23:44:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:18:21.484 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.484 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:21.484 23:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:21.484 23:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.484 23:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:21.484 
23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:21.484 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:21.484 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:21.484 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.484 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.484 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.742 23:44:10 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.742 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.000 00:18:22.000 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.000 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.000 23:44:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.259 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.259 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.259 23:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:22.259 23:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.259 23:44:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:22.259 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.259 { 00:18:22.259 "cntlid": 145, 00:18:22.259 "qid": 0, 00:18:22.259 "state": "enabled", 00:18:22.259 "thread": "nvmf_tgt_poll_group_000", 00:18:22.259 "listen_address": { 00:18:22.259 "trtype": "TCP", 00:18:22.259 "adrfam": 
"IPv4", 00:18:22.259 "traddr": "10.0.0.2", 00:18:22.259 "trsvcid": "4420" 00:18:22.259 }, 00:18:22.259 "peer_address": { 00:18:22.259 "trtype": "TCP", 00:18:22.259 "adrfam": "IPv4", 00:18:22.259 "traddr": "10.0.0.1", 00:18:22.259 "trsvcid": "55282" 00:18:22.259 }, 00:18:22.259 "auth": { 00:18:22.259 "state": "completed", 00:18:22.259 "digest": "sha512", 00:18:22.259 "dhgroup": "ffdhe8192" 00:18:22.259 } 00:18:22.259 } 00:18:22.259 ]' 00:18:22.259 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.259 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.259 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.259 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.532 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.532 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.532 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.532 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.532 23:44:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:YmVlYjk5ZTczZTYxNzAwM2U1Zjg5YzcwNjM4MWZhNmU4ZDZiOGM0NGJiZWM5N2JiRFMTcg==: --dhchap-ctrl-secret DHHC-1:03:MDJjNjIzMDliMmI5MTExOTNlYTYzOGQ2MmI0Mjg1NjI2ZGVmZjAyMzdlZTk2ZTE1YTMzZDllZjc5YWRhNmU0OV0R71Q=: 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.098 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:18:23.098 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:23.099 23:44:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:18:23.099 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:23.099 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:23.099 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:23.665 request: 00:18:23.665 { 00:18:23.665 "name": "nvme0", 00:18:23.665 "trtype": "tcp", 00:18:23.665 "traddr": "10.0.0.2", 00:18:23.665 "adrfam": "ipv4", 00:18:23.665 "trsvcid": "4420", 00:18:23.665 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:23.665 "prchk_reftag": false, 00:18:23.665 "prchk_guard": false, 00:18:23.665 "hdgst": false, 00:18:23.665 "ddgst": false, 00:18:23.665 "dhchap_key": "key2", 00:18:23.665 "method": "bdev_nvme_attach_controller", 00:18:23.665 "req_id": 1 00:18:23.665 } 00:18:23.665 Got JSON-RPC error response 00:18:23.665 response: 00:18:23.665 { 00:18:23.665 "code": -5, 00:18:23.665 "message": "Input/output error" 00:18:23.665 } 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 
00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:23.665 
23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:23.665 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:24.232 request: 00:18:24.232 { 00:18:24.232 "name": "nvme0", 00:18:24.232 "trtype": "tcp", 00:18:24.232 "traddr": "10.0.0.2", 00:18:24.232 "adrfam": "ipv4", 00:18:24.232 "trsvcid": "4420", 00:18:24.232 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:24.232 "prchk_reftag": false, 00:18:24.232 "prchk_guard": false, 00:18:24.232 "hdgst": false, 00:18:24.232 "ddgst": false, 00:18:24.232 "dhchap_key": "key1", 00:18:24.232 "dhchap_ctrlr_key": "ckey2", 00:18:24.232 "method": "bdev_nvme_attach_controller", 00:18:24.232 "req_id": 1 00:18:24.232 } 00:18:24.232 Got JSON-RPC error response 00:18:24.232 response: 00:18:24.232 { 00:18:24.232 "code": -5, 00:18:24.232 "message": "Input/output error" 00:18:24.232 } 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 
00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.232 23:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.489 request: 00:18:24.489 { 00:18:24.489 "name": "nvme0", 00:18:24.489 "trtype": "tcp", 00:18:24.489 "traddr": "10.0.0.2", 00:18:24.489 "adrfam": "ipv4", 00:18:24.489 "trsvcid": "4420", 00:18:24.489 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:24.489 "prchk_reftag": false, 00:18:24.489 "prchk_guard": false, 00:18:24.489 "hdgst": false, 00:18:24.489 "ddgst": false, 00:18:24.489 "dhchap_key": "key1", 00:18:24.489 "dhchap_ctrlr_key": "ckey1", 00:18:24.489 "method": "bdev_nvme_attach_controller", 00:18:24.489 "req_id": 1 00:18:24.489 } 00:18:24.489 Got JSON-RPC error response 00:18:24.489 response: 00:18:24.489 { 00:18:24.489 "code": -5, 00:18:24.489 "message": "Input/output error" 00:18:24.489 } 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:24.489 23:44:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1002189 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1002189 ']' 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1002189 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1002189 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1002189' 00:18:24.489 killing process with pid 1002189 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1002189 00:18:24.489 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1002189 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L 
nvmf_auth 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1022773 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1022773 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1022773 ']' 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:24.746 23:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1022773 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1022773 ']' 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:25.681 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.940 23:44:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:26.508 00:18:26.508 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.508 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.508 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.766 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.766 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.766 23:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:26.766 23:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.767 23:44:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:26.767 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.767 { 00:18:26.767 "cntlid": 1, 00:18:26.767 "qid": 0, 00:18:26.767 "state": "enabled", 00:18:26.767 "thread": "nvmf_tgt_poll_group_000", 00:18:26.767 "listen_address": { 00:18:26.767 "trtype": "TCP", 00:18:26.767 "adrfam": "IPv4", 00:18:26.767 "traddr": "10.0.0.2", 00:18:26.767 "trsvcid": "4420" 00:18:26.767 }, 00:18:26.767 "peer_address": { 00:18:26.767 "trtype": "TCP", 00:18:26.767 "adrfam": "IPv4", 00:18:26.767 "traddr": "10.0.0.1", 00:18:26.767 "trsvcid": 
"55358" 00:18:26.767 }, 00:18:26.767 "auth": { 00:18:26.767 "state": "completed", 00:18:26.767 "digest": "sha512", 00:18:26.767 "dhgroup": "ffdhe8192" 00:18:26.767 } 00:18:26.767 } 00:18:26.767 ]' 00:18:26.767 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.767 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.767 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.767 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.767 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.767 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.767 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.767 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.026 23:44:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:MWIzZDkxODZlZjhhNGRlN2RjMzA4YmE1MzQ2NTZkN2FhMzBiOWVmMGUzOGU3NmZiYTRiMTM1NzE4MTJiYWEzNQoVmO8=: 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@634 -- # type -t hostrpc 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.595 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:27.855 request: 00:18:27.855 { 00:18:27.855 "name": "nvme0", 00:18:27.855 "trtype": "tcp", 00:18:27.855 "traddr": "10.0.0.2", 00:18:27.855 "adrfam": "ipv4", 00:18:27.855 "trsvcid": "4420", 00:18:27.855 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:27.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:27.855 "prchk_reftag": false, 00:18:27.855 "prchk_guard": false, 00:18:27.855 "hdgst": false, 00:18:27.855 "ddgst": false, 00:18:27.855 "dhchap_key": "key3", 00:18:27.855 "method": "bdev_nvme_attach_controller", 00:18:27.855 "req_id": 1 00:18:27.855 } 00:18:27.855 Got JSON-RPC error response 00:18:27.855 response: 00:18:27.855 { 00:18:27.855 "code": -5, 00:18:27.855 "message": "Input/output error" 00:18:27.855 } 00:18:27.855 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:18:27.855 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:27.855 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:27.855 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:27.855 23:44:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:27.855 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:27.855 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:27.855 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:28.114 23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.114 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:18:28.114 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.114 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:18:28.114 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:28.114 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:18:28.114 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:28.114 23:44:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.114 
23:44:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.373 request: 00:18:28.373 { 00:18:28.373 "name": "nvme0", 00:18:28.373 "trtype": "tcp", 00:18:28.373 "traddr": "10.0.0.2", 00:18:28.373 "adrfam": "ipv4", 00:18:28.373 "trsvcid": "4420", 00:18:28.373 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:28.373 "prchk_reftag": false, 00:18:28.373 "prchk_guard": false, 00:18:28.373 "hdgst": false, 00:18:28.373 "ddgst": false, 00:18:28.373 "dhchap_key": "key3", 00:18:28.373 "method": "bdev_nvme_attach_controller", 00:18:28.373 "req_id": 1 00:18:28.373 } 00:18:28.373 Got JSON-RPC error response 00:18:28.373 response: 00:18:28.373 { 00:18:28.373 "code": -5, 00:18:28.373 "message": "Input/output error" 00:18:28.373 } 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.373 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:28.632 request: 00:18:28.632 { 00:18:28.632 "name": "nvme0", 00:18:28.632 "trtype": "tcp", 00:18:28.632 "traddr": "10.0.0.2", 00:18:28.632 "adrfam": "ipv4", 00:18:28.632 "trsvcid": "4420", 00:18:28.632 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:28.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:28.632 "prchk_reftag": false, 00:18:28.632 "prchk_guard": false, 00:18:28.632 "hdgst": false, 00:18:28.632 "ddgst": false, 00:18:28.632 "dhchap_key": "key0", 00:18:28.632 "dhchap_ctrlr_key": "key1", 00:18:28.632 "method": "bdev_nvme_attach_controller", 00:18:28.632 "req_id": 1 00:18:28.632 } 00:18:28.632 Got JSON-RPC error response 00:18:28.632 response: 00:18:28.633 { 
00:18:28.633 "code": -5, 00:18:28.633 "message": "Input/output error" 00:18:28.633 } 00:18:28.633 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:18:28.633 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:28.633 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:28.633 23:44:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:28.633 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:28.633 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:28.892 00:18:28.892 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:28.892 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:28.892 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.151 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.151 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.151 23:44:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - 
SIGINT SIGTERM EXIT 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1002431 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1002431 ']' 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1002431 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1002431 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1002431' 00:18:29.151 killing process with pid 1002431 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1002431 00:18:29.151 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1002431 00:18:29.720 23:44:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:29.720 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:29.720 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.721 rmmod nvme_tcp 00:18:29.721 rmmod nvme_fabrics 
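The cleanup above probes each daemon with `kill -0 <pid>` before terminating it. A minimal Python sketch of that liveness-check-then-kill pattern (`os.kill(pid, 0)` is the direct equivalent of shell `kill -0`; the real `killprocess` helper also verifies the process name via `ps`, which is omitted here):

```python
import os
import signal
import subprocess

def killprocess(pid: int) -> None:
    """Liveness probe then terminate, mirroring the harness helper.

    os.kill(pid, 0) sends no signal; it only checks that the pid exists
    (raising ProcessLookupError if it does not), same as 'kill -0'.
    """
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return  # already gone, nothing to kill
    print(f"killing process with pid {pid}")
    os.kill(pid, signal.SIGTERM)

# Stand-in child process for the nvmf target pid in the trace
proc = subprocess.Popen(["sleep", "30"])
killprocess(proc.pid)
proc.wait()  # reap the child; on POSIX returncode is -SIGTERM
```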
00:18:29.721 rmmod nvme_keyring 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1022773 ']' 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1022773 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1022773 ']' 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1022773 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1022773 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1022773' 00:18:29.721 killing process with pid 1022773 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1022773 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1022773 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.721 23:44:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.298 23:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:32.298 23:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.CVF /tmp/spdk.key-sha256.Rr4 /tmp/spdk.key-sha384.yQ8 /tmp/spdk.key-sha512.Tp7 /tmp/spdk.key-sha512.hbh /tmp/spdk.key-sha384.Qwf /tmp/spdk.key-sha256.O7f '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:32.298 00:18:32.298 real 2m10.971s 00:18:32.298 user 5m0.891s 00:18:32.298 sys 0m20.384s 00:18:32.298 23:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:32.298 23:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.298 ************************************ 00:18:32.298 END TEST nvmf_auth_target 00:18:32.298 ************************************ 00:18:32.298 23:44:20 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:18:32.298 23:44:20 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:32.299 23:44:20 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:32.299 23:44:20 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:18:32.299 23:44:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:32.299 23:44:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:32.299 
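In the nvmf_auth_target run above, the host is re-registered on the subsystem without its DH-HMAC-CHAP keys, so the subsequent `bdev_nvme_attach_controller` with `--dhchap-key key0 --dhchap-ctrlr-key key1` is expected to fail; the target reports it as JSON-RPC error code -5 (Input/output error), which the `NOT` wrapper treats as success. A sketch of the request body `scripts/rpc.py` would send over `/var/tmp/host.sock`: the params are copied from the request dump in the log, while the `jsonrpc`/`id` envelope is the standard JSON-RPC 2.0 framing and is assumed here (the trace prints a flattened form):

```python
import json

# Params copied from the logged request dump for the failing attach call
params = {
    "name": "nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2024-03.io.spdk:cnode0",
    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
    "prchk_reftag": False,
    "prchk_guard": False,
    "hdgst": False,
    "ddgst": False,
    "dhchap_key": "key0",
    "dhchap_ctrlr_key": "key1",
}

# JSON-RPC 2.0 envelope (assumed framing, not shown verbatim in the trace)
request = {
    "jsonrpc": "2.0",
    "method": "bdev_nvme_attach_controller",
    "id": 1,
    "params": params,
}
wire = json.dumps(request)
```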
************************************ 00:18:32.299 START TEST nvmf_bdevio_no_huge 00:18:32.299 ************************************ 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:32.299 * Looking for test storage... 00:18:32.299 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 
00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:32.299 23:44:20 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:32.299 23:44:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.576 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:37.577 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:37.577 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:37.577 Found net devices under 0000:86:00.0: cvl_0_0 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:37.577 Found net devices under 0000:86:00.1: cvl_0_1 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:37.577 23:44:25 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.577 23:44:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.577 
23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:37.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:18:37.577 00:18:37.577 --- 10.0.0.2 ping statistics --- 00:18:37.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.577 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:37.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:18:37.577 00:18:37.577 --- 10.0.0.1 ping statistics --- 00:18:37.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.577 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:37.577 23:44:26 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1027031 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1027031 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@823 -- # '[' -z 1027031 ']' 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:37.577 23:44:26 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.577 [2024-07-15 23:44:26.298550] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:18:37.577 [2024-07-15 23:44:26.298599] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:37.577 [2024-07-15 23:44:26.362880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:37.577 [2024-07-15 23:44:26.446706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.577 [2024-07-15 23:44:26.446747] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.577 [2024-07-15 23:44:26.446754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.577 [2024-07-15 23:44:26.446760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.577 [2024-07-15 23:44:26.446765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
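`nvmf_tgt` is launched above with `-m 0x78`, and the reactor notices that follow confirm cores 3 through 6 coming up. A small sketch of how such a hex core mask decodes to core IDs (each set bit selects one core):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Decode a DPDK/SPDK -m hex core mask into the list of core IDs."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

# -m 0x78 from the nvmf_tgt invocation above: 0b1111000
print(cores_from_mask(0x78))  # → [3, 4, 5, 6]
```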
00:18:37.577 [2024-07-15 23:44:26.446898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:37.577 [2024-07-15 23:44:26.447008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:37.577 [2024-07-15 23:44:26.447116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.577 [2024-07-15 23:44:26.447117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:38.146 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:38.146 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # return 0 00:18:38.146 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:38.146 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:38.146 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.405 [2024-07-15 23:44:27.146809] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.405 Malloc0 00:18:38.405 23:44:27 
nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.405 [2024-07-15 23:44:27.191087] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:38.405 23:44:27 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:38.405 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:38.405 { 00:18:38.406 "params": { 00:18:38.406 "name": "Nvme$subsystem", 00:18:38.406 "trtype": "$TEST_TRANSPORT", 00:18:38.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:38.406 "adrfam": "ipv4", 00:18:38.406 "trsvcid": "$NVMF_PORT", 00:18:38.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:38.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:38.406 "hdgst": ${hdgst:-false}, 00:18:38.406 "ddgst": ${ddgst:-false} 00:18:38.406 }, 00:18:38.406 "method": "bdev_nvme_attach_controller" 00:18:38.406 } 00:18:38.406 EOF 00:18:38.406 )") 00:18:38.406 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:38.406 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:38.406 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:38.406 23:44:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:38.406 "params": { 00:18:38.406 "name": "Nvme1", 00:18:38.406 "trtype": "tcp", 00:18:38.406 "traddr": "10.0.0.2", 00:18:38.406 "adrfam": "ipv4", 00:18:38.406 "trsvcid": "4420", 00:18:38.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.406 "hdgst": false, 00:18:38.406 "ddgst": false 00:18:38.406 }, 00:18:38.406 "method": "bdev_nvme_attach_controller" 00:18:38.406 }' 00:18:38.406 [2024-07-15 23:44:27.240256] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:18:38.406 [2024-07-15 23:44:27.240303] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1027280 ] 00:18:38.406 [2024-07-15 23:44:27.298066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:38.665 [2024-07-15 23:44:27.384117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.665 [2024-07-15 23:44:27.384215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.665 [2024-07-15 23:44:27.384216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.665 I/O targets: 00:18:38.665 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:38.665 00:18:38.665 00:18:38.665 CUnit - A unit testing framework for C - Version 2.1-3 00:18:38.665 http://cunit.sourceforge.net/ 00:18:38.665 00:18:38.665 00:18:38.665 Suite: bdevio tests on: Nvme1n1 00:18:38.665 Test: blockdev write read block ...passed 00:18:38.665 Test: blockdev write zeroes read block ...passed 00:18:38.665 Test: blockdev write zeroes read no split ...passed 00:18:38.924 Test: blockdev write zeroes read split ...passed 00:18:38.924 Test: blockdev write zeroes read split partial ...passed 00:18:38.924 Test: blockdev reset ...[2024-07-15 23:44:27.741530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.924 [2024-07-15 23:44:27.741592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138a300 (9): Bad file descriptor 00:18:38.924 [2024-07-15 23:44:27.839236] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:38.924 passed 00:18:38.924 Test: blockdev write read 8 blocks ...passed 00:18:38.924 Test: blockdev write read size > 128k ...passed 00:18:38.924 Test: blockdev write read invalid size ...passed 00:18:38.924 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:38.924 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:38.924 Test: blockdev write read max offset ...passed 00:18:39.184 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:39.184 Test: blockdev writev readv 8 blocks ...passed 00:18:39.184 Test: blockdev writev readv 30 x 1block ...passed 00:18:39.184 Test: blockdev writev readv block ...passed 00:18:39.184 Test: blockdev writev readv size > 128k ...passed 00:18:39.184 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:39.184 Test: blockdev comparev and writev ...[2024-07-15 23:44:28.052059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.184 [2024-07-15 23:44:28.052088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:39.184 [2024-07-15 23:44:28.052101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.184 [2024-07-15 23:44:28.052110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:39.184 [2024-07-15 23:44:28.052399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.184 [2024-07-15 23:44:28.052409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:39.184 [2024-07-15 23:44:28.052421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.184 [2024-07-15 23:44:28.052428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:39.184 [2024-07-15 23:44:28.052718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.184 [2024-07-15 23:44:28.052728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:39.184 [2024-07-15 23:44:28.052739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.184 [2024-07-15 23:44:28.052746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:39.184 [2024-07-15 23:44:28.053033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.184 [2024-07-15 23:44:28.053042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:39.184 [2024-07-15 23:44:28.053054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:39.184 [2024-07-15 23:44:28.053061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:39.184 passed 00:18:39.184 Test: blockdev nvme passthru rw ...passed 00:18:39.184 Test: blockdev nvme passthru vendor specific ...[2024-07-15 23:44:28.135778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.184 [2024-07-15 23:44:28.135794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:39.184 [2024-07-15 23:44:28.135950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.184 [2024-07-15 23:44:28.135960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:39.184 [2024-07-15 23:44:28.136113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.184 [2024-07-15 23:44:28.136123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:39.184 [2024-07-15 23:44:28.136275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.184 [2024-07-15 23:44:28.136285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:39.184 passed 00:18:39.184 Test: blockdev nvme admin passthru ...passed 00:18:39.443 Test: blockdev copy ...passed 00:18:39.444 00:18:39.444 Run Summary: Type Total Ran Passed Failed Inactive 00:18:39.444 suites 1 1 n/a 0 0 00:18:39.444 tests 23 23 23 0 0 00:18:39.444 asserts 152 152 152 0 n/a 00:18:39.444 00:18:39.444 Elapsed time = 1.329 seconds 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
target/bdevio.sh@30 -- # nvmftestfini 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:39.702 rmmod nvme_tcp 00:18:39.702 rmmod nvme_fabrics 00:18:39.702 rmmod nvme_keyring 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1027031 ']' 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1027031 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@942 -- # '[' -z 1027031 ']' 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # kill -0 1027031 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # uname 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1027031 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # process_name=reactor_3 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' reactor_3 = sudo ']' 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@960 -- # echo 'killing process with pid 1027031' 00:18:39.702 killing process with pid 1027031 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@961 -- # kill 1027031 00:18:39.702 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # wait 1027031 00:18:39.961 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:39.961 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:39.961 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:39.961 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.961 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.961 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.961 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.961 23:44:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.499 23:44:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.499 00:18:42.499 real 0m10.138s 00:18:42.499 user 0m13.192s 00:18:42.499 sys 0m4.897s 00:18:42.499 23:44:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:42.499 23:44:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:42.499 ************************************ 00:18:42.499 END TEST nvmf_bdevio_no_huge 00:18:42.499 ************************************ 00:18:42.499 23:44:30 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:18:42.499 23:44:30 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:42.499 23:44:30 nvmf_tcp -- 
common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:18:42.499 23:44:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:42.499 23:44:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:42.499 ************************************ 00:18:42.499 START TEST nvmf_tls 00:18:42.499 ************************************ 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:42.499 * Looking for test storage... 00:18:42.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.499 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.500 23:44:31 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:42.500 23:44:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:47.776 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:47.776 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.776 23:44:36 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.776 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:47.777 Found net devices under 0000:86:00.0: cvl_0_0 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:47.777 Found net devices under 0000:86:00.1: cvl_0_1 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:18:47.777 00:18:47.777 --- 10.0.0.2 ping statistics --- 00:18:47.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.777 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:18:47.777 00:18:47.777 --- 10.0.0.1 ping statistics --- 00:18:47.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.777 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1031021 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1031021 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1031021 ']' 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:47.777 23:44:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.777 [2024-07-15 23:44:36.682661] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:18:47.777 [2024-07-15 23:44:36.682704] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.777 [2024-07-15 23:44:36.741532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.036 [2024-07-15 23:44:36.812389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.036 [2024-07-15 23:44:36.812429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.036 [2024-07-15 23:44:36.812435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.036 [2024-07-15 23:44:36.812441] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.036 [2024-07-15 23:44:36.812446] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:48.036 [2024-07-15 23:44:36.812465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.603 23:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:48.603 23:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:18:48.603 23:44:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.603 23:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:48.603 23:44:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.603 23:44:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.603 23:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:48.603 23:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:48.861 true 00:18:48.862 23:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:48.862 23:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.121 23:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:49.121 23:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:49.121 23:44:37 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:49.121 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.121 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:49.380 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:49.380 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:49.380 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@88 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:49.639 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:49.639 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.639 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:49.639 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:49.639 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.639 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:49.898 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:49.898 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:49.898 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:50.158 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:50.158 23:44:38 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:50.158 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:50.158 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:50.158 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:50.418 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:50.418 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # 
ktls=false 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.RN3CUi07sT 00:18:50.677 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:50.677 
23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.XuPGNy74mx 00:18:50.678 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:50.678 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:50.678 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.RN3CUi07sT 00:18:50.678 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.XuPGNy74mx 00:18:50.678 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:50.937 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:51.196 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.RN3CUi07sT 00:18:51.196 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.RN3CUi07sT 00:18:51.196 23:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:51.196 [2024-07-15 23:44:40.069015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.196 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:51.456 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:51.456 [2024-07-15 23:44:40.413897] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.456 [2024-07-15 23:44:40.414116] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:18:51.456 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:51.715 malloc0 00:18:51.716 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.975 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RN3CUi07sT 00:18:51.975 [2024-07-15 23:44:40.891484] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:51.975 23:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RN3CUi07sT 00:19:04.216 Initializing NVMe Controllers 00:19:04.216 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:04.216 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:04.216 Initialization complete. Launching workers. 
00:19:04.216 ======================================================== 00:19:04.216 Latency(us) 00:19:04.216 Device Information : IOPS MiB/s Average min max 00:19:04.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16458.44 64.29 3889.00 814.49 4509.50 00:19:04.216 ======================================================== 00:19:04.216 Total : 16458.44 64.29 3889.00 814.49 4509.50 00:19:04.216 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RN3CUi07sT 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RN3CUi07sT' 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1033372 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1033372 /var/tmp/bdevperf.sock 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1033372 ']' 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.216 [2024-07-15 23:44:51.054459] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:04.216 [2024-07-15 23:44:51.054518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1033372 ] 00:19:04.216 [2024-07-15 23:44:51.104923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.216 [2024-07-15 23:44:51.183950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:04.216 23:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RN3CUi07sT 00:19:04.216 [2024-07-15 23:44:52.013965] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.216 [2024-07-15 23:44:52.014031] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:04.216 TLSTESTn1 00:19:04.216 23:44:52 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s 
/var/tmp/bdevperf.sock perform_tests 00:19:04.216 Running I/O for 10 seconds... 00:19:14.186 00:19:14.186 Latency(us) 00:19:14.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.186 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:14.186 Verification LBA range: start 0x0 length 0x2000 00:19:14.186 TLSTESTn1 : 10.05 5275.76 20.61 0.00 0.00 24185.66 4729.99 62914.56 00:19:14.186 =================================================================================================================== 00:19:14.186 Total : 5275.76 20.61 0.00 0.00 24185.66 4729.99 62914.56 00:19:14.186 0 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1033372 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1033372 ']' 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1033372 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1033372 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1033372' 00:19:14.186 killing process with pid 1033372 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1033372 00:19:14.186 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.186 00:19:14.186 Latency(us) 00:19:14.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:14.186 =================================================================================================================== 00:19:14.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.186 [2024-07-15 23:45:02.332434] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1033372 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XuPGNy74mx 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XuPGNy74mx 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XuPGNy74mx 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XuPGNy74mx' 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- 
target/tls.sh@28 -- # bdevperf_pid=1035342 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1035342 /var/tmp/bdevperf.sock 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1035342 ']' 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:14.186 23:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.186 [2024-07-15 23:45:02.561801] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:19:14.186 [2024-07-15 23:45:02.561849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035342 ] 00:19:14.186 [2024-07-15 23:45:02.611502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.186 [2024-07-15 23:45:02.683617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.446 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:14.446 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:14.446 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XuPGNy74mx 00:19:14.705 [2024-07-15 23:45:03.526237] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.705 [2024-07-15 23:45:03.526308] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:14.705 [2024-07-15 23:45:03.533248] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:14.705 [2024-07-15 23:45:03.533508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c9570 (107): Transport endpoint is not connected 00:19:14.705 [2024-07-15 23:45:03.534501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c9570 (9): Bad file descriptor 00:19:14.705 [2024-07-15 23:45:03.535502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:14.705 [2024-07-15 23:45:03.535513] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:14.705 [2024-07-15 23:45:03.535523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:14.705 request: 00:19:14.705 { 00:19:14.705 "name": "TLSTEST", 00:19:14.705 "trtype": "tcp", 00:19:14.705 "traddr": "10.0.0.2", 00:19:14.705 "adrfam": "ipv4", 00:19:14.705 "trsvcid": "4420", 00:19:14.705 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.705 "prchk_reftag": false, 00:19:14.705 "prchk_guard": false, 00:19:14.705 "hdgst": false, 00:19:14.705 "ddgst": false, 00:19:14.705 "psk": "/tmp/tmp.XuPGNy74mx", 00:19:14.705 "method": "bdev_nvme_attach_controller", 00:19:14.705 "req_id": 1 00:19:14.705 } 00:19:14.705 Got JSON-RPC error response 00:19:14.705 response: 00:19:14.705 { 00:19:14.705 "code": -5, 00:19:14.705 "message": "Input/output error" 00:19:14.705 } 00:19:14.705 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1035342 00:19:14.705 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1035342 ']' 00:19:14.705 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1035342 00:19:14.705 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:14.705 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:14.705 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1035342 00:19:14.705 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:19:14.705 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:19:14.705 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1035342' 00:19:14.705 killing process with pid 1035342 00:19:14.705 
23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1035342 00:19:14.705 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.705 00:19:14.705 Latency(us) 00:19:14.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.705 =================================================================================================================== 00:19:14.705 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.705 [2024-07-15 23:45:03.598057] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:14.705 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1035342 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RN3CUi07sT 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RN3CUi07sT 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # 
case "$(type -t "$arg")" in 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RN3CUi07sT 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RN3CUi07sT' 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1035579 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1035579 /var/tmp/bdevperf.sock 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1035579 ']' 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:14.964 23:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.964 [2024-07-15 23:45:03.822649] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:14.964 [2024-07-15 23:45:03.822696] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035579 ] 00:19:14.964 [2024-07-15 23:45:03.872136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.222 [2024-07-15 23:45:03.950537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.790 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:15.790 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:15.790 23:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.RN3CUi07sT 00:19:16.049 [2024-07-15 23:45:04.784604] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.049 [2024-07-15 23:45:04.784668] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:16.049 [2024-07-15 23:45:04.796362] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:16.049 [2024-07-15 23:45:04.796387] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 
nqn.2016-06.io.spdk:cnode1 00:19:16.049 [2024-07-15 23:45:04.796410] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:16.049 [2024-07-15 23:45:04.796998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ece570 (107): Transport endpoint is not connected 00:19:16.049 [2024-07-15 23:45:04.797991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ece570 (9): Bad file descriptor 00:19:16.049 [2024-07-15 23:45:04.798992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:16.049 [2024-07-15 23:45:04.799002] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:16.049 [2024-07-15 23:45:04.799011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:16.049 request: 00:19:16.049 { 00:19:16.049 "name": "TLSTEST", 00:19:16.049 "trtype": "tcp", 00:19:16.049 "traddr": "10.0.0.2", 00:19:16.049 "adrfam": "ipv4", 00:19:16.049 "trsvcid": "4420", 00:19:16.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.049 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:16.049 "prchk_reftag": false, 00:19:16.049 "prchk_guard": false, 00:19:16.049 "hdgst": false, 00:19:16.049 "ddgst": false, 00:19:16.049 "psk": "/tmp/tmp.RN3CUi07sT", 00:19:16.049 "method": "bdev_nvme_attach_controller", 00:19:16.049 "req_id": 1 00:19:16.049 } 00:19:16.049 Got JSON-RPC error response 00:19:16.049 response: 00:19:16.049 { 00:19:16.049 "code": -5, 00:19:16.049 "message": "Input/output error" 00:19:16.049 } 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1035579 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1035579 ']' 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill 
-0 1035579 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1035579 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1035579' 00:19:16.049 killing process with pid 1035579 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1035579 00:19:16.049 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.049 00:19:16.049 Latency(us) 00:19:16.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.049 =================================================================================================================== 00:19:16.049 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.049 [2024-07-15 23:45:04.873476] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:16.049 23:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1035579 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.RN3CUi07sT 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RN3CUi07sT 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:16.308 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RN3CUi07sT 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RN3CUi07sT' 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1035811 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1035811 /var/tmp/bdevperf.sock 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1035811 ']' 
00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:16.309 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.309 [2024-07-15 23:45:05.098783] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:16.309 [2024-07-15 23:45:05.098826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1035811 ] 00:19:16.309 [2024-07-15 23:45:05.150136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.309 [2024-07-15 23:45:05.218679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.246 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:17.246 23:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:17.246 23:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RN3CUi07sT 00:19:17.246 [2024-07-15 23:45:06.056851] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.246 [2024-07-15 23:45:06.056921] 
nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:17.246 [2024-07-15 23:45:06.061634] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:17.246 [2024-07-15 23:45:06.061658] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:17.246 [2024-07-15 23:45:06.061685] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:17.246 [2024-07-15 23:45:06.062357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1831570 (107): Transport endpoint is not connected 00:19:17.246 [2024-07-15 23:45:06.063348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1831570 (9): Bad file descriptor 00:19:17.246 [2024-07-15 23:45:06.064349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:17.246 [2024-07-15 23:45:06.064359] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:17.246 [2024-07-15 23:45:06.064369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:17.246 request: 00:19:17.246 { 00:19:17.246 "name": "TLSTEST", 00:19:17.246 "trtype": "tcp", 00:19:17.246 "traddr": "10.0.0.2", 00:19:17.246 "adrfam": "ipv4", 00:19:17.246 "trsvcid": "4420", 00:19:17.246 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:17.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.246 "prchk_reftag": false, 00:19:17.246 "prchk_guard": false, 00:19:17.246 "hdgst": false, 00:19:17.246 "ddgst": false, 00:19:17.246 "psk": "/tmp/tmp.RN3CUi07sT", 00:19:17.246 "method": "bdev_nvme_attach_controller", 00:19:17.246 "req_id": 1 00:19:17.246 } 00:19:17.246 Got JSON-RPC error response 00:19:17.246 response: 00:19:17.246 { 00:19:17.246 "code": -5, 00:19:17.246 "message": "Input/output error" 00:19:17.246 } 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1035811 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1035811 ']' 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1035811 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1035811 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1035811' 00:19:17.246 killing process with pid 1035811 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1035811 00:19:17.246 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.246 00:19:17.246 Latency(us) 00:19:17.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.246 
=================================================================================================================== 00:19:17.246 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.246 [2024-07-15 23:45:06.126497] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:17.246 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1035811 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 
-- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:17.505 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1036047 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1036047 /var/tmp/bdevperf.sock 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1036047 ']' 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:17.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:17.506 23:45:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.506 [2024-07-15 23:45:06.352625] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:19:17.506 [2024-07-15 23:45:06.352670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036047 ] 00:19:17.506 [2024-07-15 23:45:06.403032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.506 [2024-07-15 23:45:06.471759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:18.444 [2024-07-15 23:45:07.313651] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:18.444 [2024-07-15 23:45:07.315516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cc4af0 (9): Bad file descriptor 00:19:18.444 [2024-07-15 23:45:07.316514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:18.444 [2024-07-15 23:45:07.316526] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:18.444 [2024-07-15 23:45:07.316535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:18.444 request: 00:19:18.444 { 00:19:18.444 "name": "TLSTEST", 00:19:18.444 "trtype": "tcp", 00:19:18.444 "traddr": "10.0.0.2", 00:19:18.444 "adrfam": "ipv4", 00:19:18.444 "trsvcid": "4420", 00:19:18.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:18.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:18.444 "prchk_reftag": false, 00:19:18.444 "prchk_guard": false, 00:19:18.444 "hdgst": false, 00:19:18.444 "ddgst": false, 00:19:18.444 "method": "bdev_nvme_attach_controller", 00:19:18.444 "req_id": 1 00:19:18.444 } 00:19:18.444 Got JSON-RPC error response 00:19:18.444 response: 00:19:18.444 { 00:19:18.444 "code": -5, 00:19:18.444 "message": "Input/output error" 00:19:18.444 } 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1036047 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1036047 ']' 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1036047 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1036047 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1036047' 00:19:18.444 killing process with pid 1036047 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1036047 00:19:18.444 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.444 00:19:18.444 Latency(us) 00:19:18.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.444 
=================================================================================================================== 00:19:18.444 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:18.444 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1036047 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1031021 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1031021 ']' 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1031021 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1031021 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1031021' 00:19:18.703 killing process with pid 1031021 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1031021 00:19:18.703 [2024-07-15 23:45:07.601862] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:18.703 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
wait 1031021 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Uq5xADRNMg 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Uq5xADRNMg 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1036302 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:18.963 23:45:07 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1036302 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1036302 ']' 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:18.963 23:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.963 [2024-07-15 23:45:07.903341] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:18.963 [2024-07-15 23:45:07.903388] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.223 [2024-07-15 23:45:07.959803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.223 [2024-07-15 23:45:08.033185] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.223 [2024-07-15 23:45:08.033223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.223 [2024-07-15 23:45:08.033235] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.223 [2024-07-15 23:45:08.033241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.223 [2024-07-15 23:45:08.033246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:19.223 [2024-07-15 23:45:08.033264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.791 23:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:19.791 23:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:19.791 23:45:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:19.791 23:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:19.791 23:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:19.792 23:45:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.792 23:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Uq5xADRNMg 00:19:19.792 23:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Uq5xADRNMg 00:19:19.792 23:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:20.050 [2024-07-15 23:45:08.889296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.051 23:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:20.309 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:20.310 [2024-07-15 23:45:09.222149] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:20.310 [2024-07-15 23:45:09.222331] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.310 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 
4096 -b malloc0 00:19:20.569 malloc0 00:19:20.569 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Uq5xADRNMg 00:19:20.828 [2024-07-15 23:45:09.711541] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Uq5xADRNMg 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Uq5xADRNMg' 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1036913 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1036913 /var/tmp/bdevperf.sock 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1036913 ']' 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:20.828 23:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.828 [2024-07-15 23:45:09.776821] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:20.828 [2024-07-15 23:45:09.776870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036913 ] 00:19:21.086 [2024-07-15 23:45:09.828508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.086 [2024-07-15 23:45:09.902293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.654 23:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:21.654 23:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:21.654 23:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Uq5xADRNMg 00:19:21.912 [2024-07-15 23:45:10.732916] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.912 [2024-07-15 23:45:10.732991] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be 
removed in v24.09 00:19:21.912 TLSTESTn1 00:19:21.912 23:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:22.171 Running I/O for 10 seconds... 00:19:32.192 00:19:32.192 Latency(us) 00:19:32.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.192 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:32.192 Verification LBA range: start 0x0 length 0x2000 00:19:32.192 TLSTESTn1 : 10.03 3173.96 12.40 0.00 0.00 40246.09 4872.46 67929.49 00:19:32.192 =================================================================================================================== 00:19:32.192 Total : 3173.96 12.40 0.00 0.00 40246.09 4872.46 67929.49 00:19:32.192 0 00:19:32.192 23:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:32.192 23:45:20 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1036913 00:19:32.192 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1036913 ']' 00:19:32.192 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1036913 00:19:32.192 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:32.192 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:32.192 23:45:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1036913 00:19:32.192 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:19:32.192 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:19:32.192 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1036913' 00:19:32.192 killing process with pid 1036913 00:19:32.192 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1036913 
00:19:32.192 Received shutdown signal, test time was about 10.000000 seconds 00:19:32.192 00:19:32.192 Latency(us) 00:19:32.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.192 =================================================================================================================== 00:19:32.192 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:32.192 [2024-07-15 23:45:21.009037] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:32.192 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1036913 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Uq5xADRNMg 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Uq5xADRNMg 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Uq5xADRNMg 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=run_bdevperf 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t run_bdevperf 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Uq5xADRNMg 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Uq5xADRNMg' 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1038848 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1038848 /var/tmp/bdevperf.sock 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1038848 ']' 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:32.451 23:45:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.451 [2024-07-15 23:45:21.244513] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:19:32.451 [2024-07-15 23:45:21.244560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1038848 ] 00:19:32.451 [2024-07-15 23:45:21.295089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.451 [2024-07-15 23:45:21.371372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Uq5xADRNMg 00:19:33.388 [2024-07-15 23:45:22.190194] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.388 [2024-07-15 23:45:22.190247] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:33.388 [2024-07-15 23:45:22.190255] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Uq5xADRNMg 00:19:33.388 request: 00:19:33.388 { 00:19:33.388 "name": "TLSTEST", 00:19:33.388 "trtype": "tcp", 00:19:33.388 "traddr": "10.0.0.2", 00:19:33.388 "adrfam": "ipv4", 00:19:33.388 "trsvcid": "4420", 00:19:33.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.388 "prchk_reftag": false, 00:19:33.388 "prchk_guard": false, 00:19:33.388 "hdgst": false, 00:19:33.388 "ddgst": false, 00:19:33.388 "psk": "/tmp/tmp.Uq5xADRNMg", 00:19:33.388 "method": "bdev_nvme_attach_controller", 00:19:33.388 "req_id": 1 00:19:33.388 } 00:19:33.388 Got JSON-RPC 
error response 00:19:33.388 response: 00:19:33.388 { 00:19:33.388 "code": -1, 00:19:33.388 "message": "Operation not permitted" 00:19:33.388 } 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1038848 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1038848 ']' 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1038848 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1038848 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1038848' 00:19:33.388 killing process with pid 1038848 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1038848 00:19:33.388 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.388 00:19:33.388 Latency(us) 00:19:33.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.388 =================================================================================================================== 00:19:33.388 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.388 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1038848 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ 
-n '' ]] 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1036302 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1036302 ']' 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1036302 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1036302 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1036302' 00:19:33.647 killing process with pid 1036302 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1036302 00:19:33.647 [2024-07-15 23:45:22.474525] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:33.647 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1036302 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1039134 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1039134 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1039134 ']' 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:33.906 23:45:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.906 [2024-07-15 23:45:22.723280] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:33.906 [2024-07-15 23:45:22.723328] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.906 [2024-07-15 23:45:22.779283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.906 [2024-07-15 23:45:22.856326] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.906 [2024-07-15 23:45:22.856365] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.906 [2024-07-15 23:45:22.856372] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.906 [2024-07-15 23:45:22.856382] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
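The `-1 / "Operation not permitted"` failure earlier in this run comes from SPDK's PSK file permission check: after the test does `chmod 0666` on `/tmp/tmp.Uq5xADRNMg`, `bdev_nvme_load_psk` refuses a key readable by group/other. A rough shell analogue of that check (the temp-file path here is a placeholder, not the test's actual key, and the exact mask SPDK applies is an assumption based on the "Incorrect permissions for PSK file" messages above):

```shell
#!/usr/bin/env bash
# Sketch of a group/other-readable check like the one the PSK loader applies.
key=$(mktemp)

chmod 0666 "$key"
mode=$(stat -c '%a' "$key")
# 0$mode parses the stat output as octal; 077 masks the group/other bits.
if [ "$((0$mode & 077))" -ne 0 ]; then
  echo "refusing PSK file with mode $mode"   # the chmod 0666 branch hits this
fi

chmod 0600 "$key"
mode=$(stat -c '%a' "$key")
if [ "$((0$mode & 077))" -eq 0 ]; then
  echo "PSK file mode $mode accepted"        # the chmod 0600 branch hits this
fi

rm -f "$key"
```

This mirrors why the suite later restores `chmod 0600` before re-running `nvmf_subsystem_add_host` and `bdev_nvme_attach_controller` with the same `--psk` path and both succeed.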
00:19:33.906 [2024-07-15 23:45:22.856387] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.906 [2024-07-15 23:45:22.856413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Uq5xADRNMg 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@642 -- # local es=0 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@644 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Uq5xADRNMg 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@630 -- # local arg=setup_nvmf_tgt 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # type -t setup_nvmf_tgt 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # setup_nvmf_tgt /tmp/tmp.Uq5xADRNMg 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Uq5xADRNMg 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:34.841 [2024-07-15 23:45:23.707988] tcp.c: 
672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.841 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.100 23:45:23 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.100 [2024-07-15 23:45:24.056889] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.100 [2024-07-15 23:45:24.057077] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.358 23:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.358 malloc0 00:19:35.358 23:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.616 23:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Uq5xADRNMg 00:19:35.616 [2024-07-15 23:45:24.582526] tcp.c:3603:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:35.616 [2024-07-15 23:45:24.582555] tcp.c:3689:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:35.616 [2024-07-15 23:45:24.582579] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:35.616 request: 00:19:35.616 { 00:19:35.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.616 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.616 "psk": "/tmp/tmp.Uq5xADRNMg", 00:19:35.616 "method": "nvmf_subsystem_add_host", 00:19:35.616 "req_id": 1 00:19:35.616 } 
00:19:35.616 Got JSON-RPC error response 00:19:35.616 response: 00:19:35.616 { 00:19:35.616 "code": -32603, 00:19:35.616 "message": "Internal error" 00:19:35.616 } 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@645 -- # es=1 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1039134 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1039134 ']' 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1039134 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1039134 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1039134' 00:19:35.874 killing process with pid 1039134 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1039134 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1039134 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Uq5xADRNMg 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- 
# xtrace_disable 00:19:35.874 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.132 23:45:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1039520 00:19:36.132 23:45:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1039520 00:19:36.132 23:45:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:36.132 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1039520 ']' 00:19:36.132 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.132 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:36.132 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.132 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:36.132 23:45:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.132 [2024-07-15 23:45:24.900035] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:36.132 [2024-07-15 23:45:24.900080] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.132 [2024-07-15 23:45:24.959067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.132 [2024-07-15 23:45:25.026951] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.132 [2024-07-15 23:45:25.026992] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:36.132 [2024-07-15 23:45:25.026998] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.132 [2024-07-15 23:45:25.027004] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.132 [2024-07-15 23:45:25.027009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.132 [2024-07-15 23:45:25.027033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.065 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:37.065 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:37.065 23:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:37.065 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:37.065 23:45:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.065 23:45:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.065 23:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Uq5xADRNMg 00:19:37.065 23:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Uq5xADRNMg 00:19:37.065 23:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:37.065 [2024-07-15 23:45:25.878444] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.065 23:45:25 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:37.329 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:19:37.329 [2024-07-15 23:45:26.215319] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.329 [2024-07-15 23:45:26.215508] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.329 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:37.588 malloc0 00:19:37.588 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Uq5xADRNMg 00:19:37.847 [2024-07-15 23:45:26.720881] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1039814 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1039814 /var/tmp/bdevperf.sock 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1039814 ']' 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:37.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:37.847 23:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.847 [2024-07-15 23:45:26.769018] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:37.847 [2024-07-15 23:45:26.769061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1039814 ] 00:19:38.105 [2024-07-15 23:45:26.821383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.105 [2024-07-15 23:45:26.894898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.105 23:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:38.105 23:45:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:38.105 23:45:26 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Uq5xADRNMg 00:19:38.363 [2024-07-15 23:45:27.136254] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.363 [2024-07-15 23:45:27.136328] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:38.363 TLSTESTn1 00:19:38.363 23:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:38.622 23:45:27 nvmf_tcp.nvmf_tls -- 
target/tls.sh@196 -- # tgtconf='{ 00:19:38.622 "subsystems": [ 00:19:38.622 { 00:19:38.622 "subsystem": "keyring", 00:19:38.622 "config": [] 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "subsystem": "iobuf", 00:19:38.622 "config": [ 00:19:38.622 { 00:19:38.622 "method": "iobuf_set_options", 00:19:38.622 "params": { 00:19:38.622 "small_pool_count": 8192, 00:19:38.622 "large_pool_count": 1024, 00:19:38.622 "small_bufsize": 8192, 00:19:38.622 "large_bufsize": 135168 00:19:38.622 } 00:19:38.622 } 00:19:38.622 ] 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "subsystem": "sock", 00:19:38.622 "config": [ 00:19:38.622 { 00:19:38.622 "method": "sock_set_default_impl", 00:19:38.622 "params": { 00:19:38.622 "impl_name": "posix" 00:19:38.622 } 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "method": "sock_impl_set_options", 00:19:38.622 "params": { 00:19:38.622 "impl_name": "ssl", 00:19:38.622 "recv_buf_size": 4096, 00:19:38.622 "send_buf_size": 4096, 00:19:38.622 "enable_recv_pipe": true, 00:19:38.622 "enable_quickack": false, 00:19:38.622 "enable_placement_id": 0, 00:19:38.622 "enable_zerocopy_send_server": true, 00:19:38.622 "enable_zerocopy_send_client": false, 00:19:38.622 "zerocopy_threshold": 0, 00:19:38.622 "tls_version": 0, 00:19:38.622 "enable_ktls": false 00:19:38.622 } 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "method": "sock_impl_set_options", 00:19:38.622 "params": { 00:19:38.622 "impl_name": "posix", 00:19:38.622 "recv_buf_size": 2097152, 00:19:38.622 "send_buf_size": 2097152, 00:19:38.622 "enable_recv_pipe": true, 00:19:38.622 "enable_quickack": false, 00:19:38.622 "enable_placement_id": 0, 00:19:38.622 "enable_zerocopy_send_server": true, 00:19:38.622 "enable_zerocopy_send_client": false, 00:19:38.622 "zerocopy_threshold": 0, 00:19:38.622 "tls_version": 0, 00:19:38.622 "enable_ktls": false 00:19:38.622 } 00:19:38.622 } 00:19:38.622 ] 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "subsystem": "vmd", 00:19:38.622 "config": [] 00:19:38.622 }, 00:19:38.622 { 
00:19:38.622 "subsystem": "accel", 00:19:38.622 "config": [ 00:19:38.622 { 00:19:38.622 "method": "accel_set_options", 00:19:38.622 "params": { 00:19:38.622 "small_cache_size": 128, 00:19:38.622 "large_cache_size": 16, 00:19:38.622 "task_count": 2048, 00:19:38.622 "sequence_count": 2048, 00:19:38.622 "buf_count": 2048 00:19:38.622 } 00:19:38.622 } 00:19:38.622 ] 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "subsystem": "bdev", 00:19:38.622 "config": [ 00:19:38.622 { 00:19:38.622 "method": "bdev_set_options", 00:19:38.622 "params": { 00:19:38.622 "bdev_io_pool_size": 65535, 00:19:38.622 "bdev_io_cache_size": 256, 00:19:38.622 "bdev_auto_examine": true, 00:19:38.622 "iobuf_small_cache_size": 128, 00:19:38.622 "iobuf_large_cache_size": 16 00:19:38.622 } 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "method": "bdev_raid_set_options", 00:19:38.622 "params": { 00:19:38.622 "process_window_size_kb": 1024 00:19:38.622 } 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "method": "bdev_iscsi_set_options", 00:19:38.622 "params": { 00:19:38.622 "timeout_sec": 30 00:19:38.622 } 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "method": "bdev_nvme_set_options", 00:19:38.622 "params": { 00:19:38.622 "action_on_timeout": "none", 00:19:38.622 "timeout_us": 0, 00:19:38.622 "timeout_admin_us": 0, 00:19:38.622 "keep_alive_timeout_ms": 10000, 00:19:38.622 "arbitration_burst": 0, 00:19:38.622 "low_priority_weight": 0, 00:19:38.622 "medium_priority_weight": 0, 00:19:38.622 "high_priority_weight": 0, 00:19:38.622 "nvme_adminq_poll_period_us": 10000, 00:19:38.622 "nvme_ioq_poll_period_us": 0, 00:19:38.622 "io_queue_requests": 0, 00:19:38.622 "delay_cmd_submit": true, 00:19:38.622 "transport_retry_count": 4, 00:19:38.622 "bdev_retry_count": 3, 00:19:38.622 "transport_ack_timeout": 0, 00:19:38.622 "ctrlr_loss_timeout_sec": 0, 00:19:38.622 "reconnect_delay_sec": 0, 00:19:38.622 "fast_io_fail_timeout_sec": 0, 00:19:38.622 "disable_auto_failback": false, 00:19:38.622 "generate_uuids": false, 00:19:38.622 
"transport_tos": 0, 00:19:38.622 "nvme_error_stat": false, 00:19:38.622 "rdma_srq_size": 0, 00:19:38.622 "io_path_stat": false, 00:19:38.622 "allow_accel_sequence": false, 00:19:38.622 "rdma_max_cq_size": 0, 00:19:38.622 "rdma_cm_event_timeout_ms": 0, 00:19:38.622 "dhchap_digests": [ 00:19:38.622 "sha256", 00:19:38.622 "sha384", 00:19:38.622 "sha512" 00:19:38.622 ], 00:19:38.622 "dhchap_dhgroups": [ 00:19:38.622 "null", 00:19:38.622 "ffdhe2048", 00:19:38.622 "ffdhe3072", 00:19:38.622 "ffdhe4096", 00:19:38.622 "ffdhe6144", 00:19:38.622 "ffdhe8192" 00:19:38.622 ] 00:19:38.622 } 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "method": "bdev_nvme_set_hotplug", 00:19:38.622 "params": { 00:19:38.622 "period_us": 100000, 00:19:38.622 "enable": false 00:19:38.622 } 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "method": "bdev_malloc_create", 00:19:38.622 "params": { 00:19:38.622 "name": "malloc0", 00:19:38.622 "num_blocks": 8192, 00:19:38.622 "block_size": 4096, 00:19:38.622 "physical_block_size": 4096, 00:19:38.622 "uuid": "cfa9ff6f-a014-42c9-b8e2-fbfdc07eeff2", 00:19:38.622 "optimal_io_boundary": 0 00:19:38.622 } 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "method": "bdev_wait_for_examine" 00:19:38.622 } 00:19:38.622 ] 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "subsystem": "nbd", 00:19:38.622 "config": [] 00:19:38.622 }, 00:19:38.622 { 00:19:38.622 "subsystem": "scheduler", 00:19:38.622 "config": [ 00:19:38.622 { 00:19:38.622 "method": "framework_set_scheduler", 00:19:38.622 "params": { 00:19:38.622 "name": "static" 00:19:38.623 } 00:19:38.623 } 00:19:38.623 ] 00:19:38.623 }, 00:19:38.623 { 00:19:38.623 "subsystem": "nvmf", 00:19:38.623 "config": [ 00:19:38.623 { 00:19:38.623 "method": "nvmf_set_config", 00:19:38.623 "params": { 00:19:38.623 "discovery_filter": "match_any", 00:19:38.623 "admin_cmd_passthru": { 00:19:38.623 "identify_ctrlr": false 00:19:38.623 } 00:19:38.623 } 00:19:38.623 }, 00:19:38.623 { 00:19:38.623 "method": "nvmf_set_max_subsystems", 00:19:38.623 
"params": { 00:19:38.623 "max_subsystems": 1024 00:19:38.623 } 00:19:38.623 }, 00:19:38.623 { 00:19:38.623 "method": "nvmf_set_crdt", 00:19:38.623 "params": { 00:19:38.623 "crdt1": 0, 00:19:38.623 "crdt2": 0, 00:19:38.623 "crdt3": 0 00:19:38.623 } 00:19:38.623 }, 00:19:38.623 { 00:19:38.623 "method": "nvmf_create_transport", 00:19:38.623 "params": { 00:19:38.623 "trtype": "TCP", 00:19:38.623 "max_queue_depth": 128, 00:19:38.623 "max_io_qpairs_per_ctrlr": 127, 00:19:38.623 "in_capsule_data_size": 4096, 00:19:38.623 "max_io_size": 131072, 00:19:38.623 "io_unit_size": 131072, 00:19:38.623 "max_aq_depth": 128, 00:19:38.623 "num_shared_buffers": 511, 00:19:38.623 "buf_cache_size": 4294967295, 00:19:38.623 "dif_insert_or_strip": false, 00:19:38.623 "zcopy": false, 00:19:38.623 "c2h_success": false, 00:19:38.623 "sock_priority": 0, 00:19:38.623 "abort_timeout_sec": 1, 00:19:38.623 "ack_timeout": 0, 00:19:38.623 "data_wr_pool_size": 0 00:19:38.623 } 00:19:38.623 }, 00:19:38.623 { 00:19:38.623 "method": "nvmf_create_subsystem", 00:19:38.623 "params": { 00:19:38.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.623 "allow_any_host": false, 00:19:38.623 "serial_number": "SPDK00000000000001", 00:19:38.623 "model_number": "SPDK bdev Controller", 00:19:38.623 "max_namespaces": 10, 00:19:38.623 "min_cntlid": 1, 00:19:38.623 "max_cntlid": 65519, 00:19:38.623 "ana_reporting": false 00:19:38.623 } 00:19:38.623 }, 00:19:38.623 { 00:19:38.623 "method": "nvmf_subsystem_add_host", 00:19:38.623 "params": { 00:19:38.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.623 "host": "nqn.2016-06.io.spdk:host1", 00:19:38.623 "psk": "/tmp/tmp.Uq5xADRNMg" 00:19:38.623 } 00:19:38.623 }, 00:19:38.623 { 00:19:38.623 "method": "nvmf_subsystem_add_ns", 00:19:38.623 "params": { 00:19:38.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.623 "namespace": { 00:19:38.623 "nsid": 1, 00:19:38.623 "bdev_name": "malloc0", 00:19:38.623 "nguid": "CFA9FF6FA01442C9B8E2FBFDC07EEFF2", 00:19:38.623 "uuid": 
"cfa9ff6f-a014-42c9-b8e2-fbfdc07eeff2", 00:19:38.623 "no_auto_visible": false 00:19:38.623 } 00:19:38.623 } 00:19:38.623 }, 00:19:38.623 { 00:19:38.623 "method": "nvmf_subsystem_add_listener", 00:19:38.623 "params": { 00:19:38.623 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.623 "listen_address": { 00:19:38.623 "trtype": "TCP", 00:19:38.623 "adrfam": "IPv4", 00:19:38.623 "traddr": "10.0.0.2", 00:19:38.623 "trsvcid": "4420" 00:19:38.623 }, 00:19:38.623 "secure_channel": true 00:19:38.623 } 00:19:38.623 } 00:19:38.623 ] 00:19:38.623 } 00:19:38.623 ] 00:19:38.623 }' 00:19:38.623 23:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:38.881 23:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:38.881 "subsystems": [ 00:19:38.881 { 00:19:38.881 "subsystem": "keyring", 00:19:38.881 "config": [] 00:19:38.881 }, 00:19:38.881 { 00:19:38.881 "subsystem": "iobuf", 00:19:38.881 "config": [ 00:19:38.881 { 00:19:38.881 "method": "iobuf_set_options", 00:19:38.881 "params": { 00:19:38.881 "small_pool_count": 8192, 00:19:38.881 "large_pool_count": 1024, 00:19:38.881 "small_bufsize": 8192, 00:19:38.881 "large_bufsize": 135168 00:19:38.881 } 00:19:38.881 } 00:19:38.881 ] 00:19:38.881 }, 00:19:38.881 { 00:19:38.881 "subsystem": "sock", 00:19:38.881 "config": [ 00:19:38.881 { 00:19:38.881 "method": "sock_set_default_impl", 00:19:38.881 "params": { 00:19:38.881 "impl_name": "posix" 00:19:38.881 } 00:19:38.881 }, 00:19:38.881 { 00:19:38.881 "method": "sock_impl_set_options", 00:19:38.881 "params": { 00:19:38.881 "impl_name": "ssl", 00:19:38.881 "recv_buf_size": 4096, 00:19:38.881 "send_buf_size": 4096, 00:19:38.881 "enable_recv_pipe": true, 00:19:38.881 "enable_quickack": false, 00:19:38.881 "enable_placement_id": 0, 00:19:38.881 "enable_zerocopy_send_server": true, 00:19:38.881 "enable_zerocopy_send_client": false, 00:19:38.881 "zerocopy_threshold": 0, 
00:19:38.881 "tls_version": 0, 00:19:38.881 "enable_ktls": false 00:19:38.881 } 00:19:38.881 }, 00:19:38.881 { 00:19:38.881 "method": "sock_impl_set_options", 00:19:38.881 "params": { 00:19:38.881 "impl_name": "posix", 00:19:38.881 "recv_buf_size": 2097152, 00:19:38.881 "send_buf_size": 2097152, 00:19:38.881 "enable_recv_pipe": true, 00:19:38.881 "enable_quickack": false, 00:19:38.881 "enable_placement_id": 0, 00:19:38.881 "enable_zerocopy_send_server": true, 00:19:38.881 "enable_zerocopy_send_client": false, 00:19:38.881 "zerocopy_threshold": 0, 00:19:38.881 "tls_version": 0, 00:19:38.881 "enable_ktls": false 00:19:38.881 } 00:19:38.881 } 00:19:38.881 ] 00:19:38.881 }, 00:19:38.881 { 00:19:38.881 "subsystem": "vmd", 00:19:38.881 "config": [] 00:19:38.881 }, 00:19:38.881 { 00:19:38.881 "subsystem": "accel", 00:19:38.881 "config": [ 00:19:38.881 { 00:19:38.881 "method": "accel_set_options", 00:19:38.881 "params": { 00:19:38.881 "small_cache_size": 128, 00:19:38.881 "large_cache_size": 16, 00:19:38.881 "task_count": 2048, 00:19:38.881 "sequence_count": 2048, 00:19:38.881 "buf_count": 2048 00:19:38.881 } 00:19:38.881 } 00:19:38.881 ] 00:19:38.881 }, 00:19:38.881 { 00:19:38.881 "subsystem": "bdev", 00:19:38.881 "config": [ 00:19:38.881 { 00:19:38.881 "method": "bdev_set_options", 00:19:38.881 "params": { 00:19:38.881 "bdev_io_pool_size": 65535, 00:19:38.881 "bdev_io_cache_size": 256, 00:19:38.881 "bdev_auto_examine": true, 00:19:38.881 "iobuf_small_cache_size": 128, 00:19:38.881 "iobuf_large_cache_size": 16 00:19:38.881 } 00:19:38.881 }, 00:19:38.881 { 00:19:38.881 "method": "bdev_raid_set_options", 00:19:38.881 "params": { 00:19:38.881 "process_window_size_kb": 1024 00:19:38.881 } 00:19:38.881 }, 00:19:38.881 { 00:19:38.881 "method": "bdev_iscsi_set_options", 00:19:38.881 "params": { 00:19:38.881 "timeout_sec": 30 00:19:38.881 } 00:19:38.881 }, 00:19:38.881 { 00:19:38.881 "method": "bdev_nvme_set_options", 00:19:38.881 "params": { 00:19:38.881 "action_on_timeout": 
"none", 00:19:38.881 "timeout_us": 0, 00:19:38.881 "timeout_admin_us": 0, 00:19:38.881 "keep_alive_timeout_ms": 10000, 00:19:38.881 "arbitration_burst": 0, 00:19:38.881 "low_priority_weight": 0, 00:19:38.881 "medium_priority_weight": 0, 00:19:38.881 "high_priority_weight": 0, 00:19:38.881 "nvme_adminq_poll_period_us": 10000, 00:19:38.881 "nvme_ioq_poll_period_us": 0, 00:19:38.881 "io_queue_requests": 512, 00:19:38.881 "delay_cmd_submit": true, 00:19:38.881 "transport_retry_count": 4, 00:19:38.881 "bdev_retry_count": 3, 00:19:38.882 "transport_ack_timeout": 0, 00:19:38.882 "ctrlr_loss_timeout_sec": 0, 00:19:38.882 "reconnect_delay_sec": 0, 00:19:38.882 "fast_io_fail_timeout_sec": 0, 00:19:38.882 "disable_auto_failback": false, 00:19:38.882 "generate_uuids": false, 00:19:38.882 "transport_tos": 0, 00:19:38.882 "nvme_error_stat": false, 00:19:38.882 "rdma_srq_size": 0, 00:19:38.882 "io_path_stat": false, 00:19:38.882 "allow_accel_sequence": false, 00:19:38.882 "rdma_max_cq_size": 0, 00:19:38.882 "rdma_cm_event_timeout_ms": 0, 00:19:38.882 "dhchap_digests": [ 00:19:38.882 "sha256", 00:19:38.882 "sha384", 00:19:38.882 "sha512" 00:19:38.882 ], 00:19:38.882 "dhchap_dhgroups": [ 00:19:38.882 "null", 00:19:38.882 "ffdhe2048", 00:19:38.882 "ffdhe3072", 00:19:38.882 "ffdhe4096", 00:19:38.882 "ffdhe6144", 00:19:38.882 "ffdhe8192" 00:19:38.882 ] 00:19:38.882 } 00:19:38.882 }, 00:19:38.882 { 00:19:38.882 "method": "bdev_nvme_attach_controller", 00:19:38.882 "params": { 00:19:38.882 "name": "TLSTEST", 00:19:38.882 "trtype": "TCP", 00:19:38.882 "adrfam": "IPv4", 00:19:38.882 "traddr": "10.0.0.2", 00:19:38.882 "trsvcid": "4420", 00:19:38.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:38.882 "prchk_reftag": false, 00:19:38.882 "prchk_guard": false, 00:19:38.882 "ctrlr_loss_timeout_sec": 0, 00:19:38.882 "reconnect_delay_sec": 0, 00:19:38.882 "fast_io_fail_timeout_sec": 0, 00:19:38.882 "psk": "/tmp/tmp.Uq5xADRNMg", 00:19:38.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:38.882 
"hdgst": false, 00:19:38.882 "ddgst": false 00:19:38.882 } 00:19:38.882 }, 00:19:38.882 { 00:19:38.882 "method": "bdev_nvme_set_hotplug", 00:19:38.882 "params": { 00:19:38.882 "period_us": 100000, 00:19:38.882 "enable": false 00:19:38.882 } 00:19:38.882 }, 00:19:38.882 { 00:19:38.882 "method": "bdev_wait_for_examine" 00:19:38.882 } 00:19:38.882 ] 00:19:38.882 }, 00:19:38.882 { 00:19:38.882 "subsystem": "nbd", 00:19:38.882 "config": [] 00:19:38.882 } 00:19:38.882 ] 00:19:38.882 }' 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1039814 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1039814 ']' 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1039814 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1039814 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1039814' 00:19:38.882 killing process with pid 1039814 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1039814 00:19:38.882 Received shutdown signal, test time was about 10.000000 seconds 00:19:38.882 00:19:38.882 Latency(us) 00:19:38.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.882 =================================================================================================================== 00:19:38.882 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.882 [2024-07-15 23:45:27.771992] app.c:1024:log_deprecation_hits: 
*WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:38.882 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1039814 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1039520 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1039520 ']' 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1039520 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1039520 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1039520' 00:19:39.139 killing process with pid 1039520 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1039520 00:19:39.139 [2024-07-15 23:45:27.996389] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:39.139 23:45:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1039520 00:19:39.398 23:45:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:39.398 23:45:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:39.398 23:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:39.398 23:45:28 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:39.398 "subsystems": [ 00:19:39.398 { 00:19:39.398 "subsystem": "keyring", 00:19:39.398 "config": [] 00:19:39.398 }, 00:19:39.398 { 
00:19:39.398 "subsystem": "iobuf", 00:19:39.398 "config": [ 00:19:39.398 { 00:19:39.398 "method": "iobuf_set_options", 00:19:39.398 "params": { 00:19:39.398 "small_pool_count": 8192, 00:19:39.398 "large_pool_count": 1024, 00:19:39.398 "small_bufsize": 8192, 00:19:39.398 "large_bufsize": 135168 00:19:39.398 } 00:19:39.398 } 00:19:39.398 ] 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "subsystem": "sock", 00:19:39.398 "config": [ 00:19:39.398 { 00:19:39.398 "method": "sock_set_default_impl", 00:19:39.398 "params": { 00:19:39.398 "impl_name": "posix" 00:19:39.398 } 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "method": "sock_impl_set_options", 00:19:39.398 "params": { 00:19:39.398 "impl_name": "ssl", 00:19:39.398 "recv_buf_size": 4096, 00:19:39.398 "send_buf_size": 4096, 00:19:39.398 "enable_recv_pipe": true, 00:19:39.398 "enable_quickack": false, 00:19:39.398 "enable_placement_id": 0, 00:19:39.398 "enable_zerocopy_send_server": true, 00:19:39.398 "enable_zerocopy_send_client": false, 00:19:39.398 "zerocopy_threshold": 0, 00:19:39.398 "tls_version": 0, 00:19:39.398 "enable_ktls": false 00:19:39.398 } 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "method": "sock_impl_set_options", 00:19:39.398 "params": { 00:19:39.398 "impl_name": "posix", 00:19:39.398 "recv_buf_size": 2097152, 00:19:39.398 "send_buf_size": 2097152, 00:19:39.398 "enable_recv_pipe": true, 00:19:39.398 "enable_quickack": false, 00:19:39.398 "enable_placement_id": 0, 00:19:39.398 "enable_zerocopy_send_server": true, 00:19:39.398 "enable_zerocopy_send_client": false, 00:19:39.398 "zerocopy_threshold": 0, 00:19:39.398 "tls_version": 0, 00:19:39.398 "enable_ktls": false 00:19:39.398 } 00:19:39.398 } 00:19:39.398 ] 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "subsystem": "vmd", 00:19:39.398 "config": [] 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "subsystem": "accel", 00:19:39.398 "config": [ 00:19:39.398 { 00:19:39.398 "method": "accel_set_options", 00:19:39.398 "params": { 00:19:39.398 "small_cache_size": 128, 
00:19:39.398 "large_cache_size": 16, 00:19:39.398 "task_count": 2048, 00:19:39.398 "sequence_count": 2048, 00:19:39.398 "buf_count": 2048 00:19:39.398 } 00:19:39.398 } 00:19:39.398 ] 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "subsystem": "bdev", 00:19:39.398 "config": [ 00:19:39.398 { 00:19:39.398 "method": "bdev_set_options", 00:19:39.398 "params": { 00:19:39.398 "bdev_io_pool_size": 65535, 00:19:39.398 "bdev_io_cache_size": 256, 00:19:39.398 "bdev_auto_examine": true, 00:19:39.398 "iobuf_small_cache_size": 128, 00:19:39.398 "iobuf_large_cache_size": 16 00:19:39.398 } 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "method": "bdev_raid_set_options", 00:19:39.398 "params": { 00:19:39.398 "process_window_size_kb": 1024 00:19:39.398 } 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "method": "bdev_iscsi_set_options", 00:19:39.398 "params": { 00:19:39.398 "timeout_sec": 30 00:19:39.398 } 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "method": "bdev_nvme_set_options", 00:19:39.398 "params": { 00:19:39.398 "action_on_timeout": "none", 00:19:39.398 "timeout_us": 0, 00:19:39.398 "timeout_admin_us": 0, 00:19:39.398 "keep_alive_timeout_ms": 10000, 00:19:39.398 "arbitration_burst": 0, 00:19:39.398 "low_priority_weight": 0, 00:19:39.398 "medium_priority_weight": 0, 00:19:39.398 "high_priority_weight": 0, 00:19:39.398 "nvme_adminq_poll_period_us": 10000, 00:19:39.398 "nvme_ioq_poll_period_us": 0, 00:19:39.398 "io_queue_requests": 0, 00:19:39.398 "delay_cmd_submit": true, 00:19:39.398 "transport_retry_count": 4, 00:19:39.398 "bdev_retry_count": 3, 00:19:39.398 "transport_ack_timeout": 0, 00:19:39.398 "ctrlr_loss_timeout_sec": 0, 00:19:39.398 "reconnect_delay_sec": 0, 00:19:39.398 "fast_io_fail_timeout_sec": 0, 00:19:39.398 "disable_auto_failback": false, 00:19:39.398 "generate_uuids": false, 00:19:39.398 "transport_tos": 0, 00:19:39.398 "nvme_error_stat": false, 00:19:39.398 "rdma_srq_size": 0, 00:19:39.398 "io_path_stat": false, 00:19:39.398 "allow_accel_sequence": false, 00:19:39.398 
"rdma_max_cq_size": 0, 00:19:39.398 "rdma_cm_event_timeout_ms": 0, 00:19:39.398 "dhchap_digests": [ 00:19:39.398 "sha256", 00:19:39.398 "sha384", 00:19:39.398 "sha512" 00:19:39.398 ], 00:19:39.398 "dhchap_dhgroups": [ 00:19:39.398 "null", 00:19:39.398 "ffdhe2048", 00:19:39.398 "ffdhe3072", 00:19:39.398 "ffdhe4096", 00:19:39.398 "ffdhe6144", 00:19:39.398 "ffdhe8192" 00:19:39.398 ] 00:19:39.398 } 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "method": "bdev_nvme_set_hotplug", 00:19:39.398 "params": { 00:19:39.398 "period_us": 100000, 00:19:39.398 "enable": false 00:19:39.398 } 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "method": "bdev_malloc_create", 00:19:39.398 "params": { 00:19:39.398 "name": "malloc0", 00:19:39.398 "num_blocks": 8192, 00:19:39.398 "block_size": 4096, 00:19:39.398 "physical_block_size": 4096, 00:19:39.398 "uuid": "cfa9ff6f-a014-42c9-b8e2-fbfdc07eeff2", 00:19:39.398 "optimal_io_boundary": 0 00:19:39.398 } 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "method": "bdev_wait_for_examine" 00:19:39.398 } 00:19:39.398 ] 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "subsystem": "nbd", 00:19:39.398 "config": [] 00:19:39.398 }, 00:19:39.398 { 00:19:39.398 "subsystem": "scheduler", 00:19:39.398 "config": [ 00:19:39.398 { 00:19:39.398 "method": "framework_set_scheduler", 00:19:39.398 "params": { 00:19:39.398 "name": "static" 00:19:39.398 } 00:19:39.398 } 00:19:39.398 ] 00:19:39.398 }, 00:19:39.398 { 00:19:39.399 "subsystem": "nvmf", 00:19:39.399 "config": [ 00:19:39.399 { 00:19:39.399 "method": "nvmf_set_config", 00:19:39.399 "params": { 00:19:39.399 "discovery_filter": "match_any", 00:19:39.399 "admin_cmd_passthru": { 00:19:39.399 "identify_ctrlr": false 00:19:39.399 } 00:19:39.399 } 00:19:39.399 }, 00:19:39.399 { 00:19:39.399 "method": "nvmf_set_max_subsystems", 00:19:39.399 "params": { 00:19:39.399 "max_subsystems": 1024 00:19:39.399 } 00:19:39.399 }, 00:19:39.399 { 00:19:39.399 "method": "nvmf_set_crdt", 00:19:39.399 "params": { 00:19:39.399 "crdt1": 0, 
00:19:39.399 "crdt2": 0, 00:19:39.399 "crdt3": 0 00:19:39.399 } 00:19:39.399 }, 00:19:39.399 { 00:19:39.399 "method": "nvmf_create_transport", 00:19:39.399 "params": { 00:19:39.399 "trtype": "TCP", 00:19:39.399 "max_queue_depth": 128, 00:19:39.399 "max_io_qpairs_per_ctrlr": 127, 00:19:39.399 "in_capsule_data_size": 4096, 00:19:39.399 "max_io_size": 131072, 00:19:39.399 "io_unit_size": 131072, 00:19:39.399 "max_aq_depth": 128, 00:19:39.399 "num_shared_buffers": 511, 00:19:39.399 "buf_cache_size": 4294967295, 00:19:39.399 "dif_insert_or_strip": false, 00:19:39.399 "zcopy": false, 00:19:39.399 "c2h_success": false, 00:19:39.399 "sock_priority": 0, 00:19:39.399 "abort_timeout_sec": 1, 00:19:39.399 "ack_timeout": 0, 00:19:39.399 "data_wr_pool_size": 0 00:19:39.399 } 00:19:39.399 }, 00:19:39.399 { 00:19:39.399 "method": "nvmf_create_subsystem", 00:19:39.399 "params": { 00:19:39.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.399 "allow_any_host": false, 00:19:39.399 "serial_number": "SPDK00000000000001", 00:19:39.399 "model_number": "SPDK bdev Controller", 00:19:39.399 "max_namespaces": 10, 00:19:39.399 "min_cntlid": 1, 00:19:39.399 "max_cntlid": 65519, 00:19:39.399 "ana_reporting": false 00:19:39.399 } 00:19:39.399 }, 00:19:39.399 { 00:19:39.399 "method": "nvmf_subsystem_add_host", 00:19:39.399 "params": { 00:19:39.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.399 "host": "nqn.2016-06.io.spdk:host1", 00:19:39.399 "psk": "/tmp/tmp.Uq5xADRNMg" 00:19:39.399 } 00:19:39.399 }, 00:19:39.399 { 00:19:39.399 "method": "nvmf_subsystem_add_ns", 00:19:39.399 "params": { 00:19:39.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.399 "namespace": { 00:19:39.399 "nsid": 1, 00:19:39.399 "bdev_name": "malloc0", 00:19:39.399 "nguid": "CFA9FF6FA01442C9B8E2FBFDC07EEFF2", 00:19:39.399 "uuid": "cfa9ff6f-a014-42c9-b8e2-fbfdc07eeff2", 00:19:39.399 "no_auto_visible": false 00:19:39.399 } 00:19:39.399 } 00:19:39.399 }, 00:19:39.399 { 00:19:39.399 "method": "nvmf_subsystem_add_listener", 
00:19:39.399 "params": { 00:19:39.399 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.399 "listen_address": { 00:19:39.399 "trtype": "TCP", 00:19:39.399 "adrfam": "IPv4", 00:19:39.399 "traddr": "10.0.0.2", 00:19:39.399 "trsvcid": "4420" 00:19:39.399 }, 00:19:39.399 "secure_channel": true 00:19:39.399 } 00:19:39.399 } 00:19:39.399 ] 00:19:39.399 } 00:19:39.399 ] 00:19:39.399 }' 00:19:39.399 23:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.399 23:45:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1040096 00:19:39.399 23:45:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1040096 00:19:39.399 23:45:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:39.399 23:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1040096 ']' 00:19:39.399 23:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.399 23:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:39.399 23:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.399 23:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:39.399 23:45:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:39.399 [2024-07-15 23:45:28.240106] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:19:39.399 [2024-07-15 23:45:28.240153] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.399 [2024-07-15 23:45:28.298044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.658 [2024-07-15 23:45:28.376367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.658 [2024-07-15 23:45:28.376401] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.658 [2024-07-15 23:45:28.376416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.658 [2024-07-15 23:45:28.376422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.658 [2024-07-15 23:45:28.376427] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:39.658 [2024-07-15 23:45:28.376477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.658 [2024-07-15 23:45:28.578197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.658 [2024-07-15 23:45:28.594171] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:39.658 [2024-07-15 23:45:28.610227] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:39.658 [2024-07-15 23:45:28.618466] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1040275 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1040275 /var/tmp/bdevperf.sock 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1040275 ']' 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.228 23:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:40.228 "subsystems": [ 00:19:40.228 { 00:19:40.228 "subsystem": "keyring", 00:19:40.228 "config": [] 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "subsystem": "iobuf", 00:19:40.228 "config": [ 00:19:40.228 { 00:19:40.228 "method": "iobuf_set_options", 00:19:40.228 "params": { 00:19:40.228 "small_pool_count": 8192, 00:19:40.228 "large_pool_count": 1024, 00:19:40.228 "small_bufsize": 8192, 00:19:40.228 "large_bufsize": 135168 00:19:40.228 } 00:19:40.228 } 00:19:40.228 ] 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "subsystem": "sock", 00:19:40.228 "config": [ 00:19:40.228 { 00:19:40.228 "method": "sock_set_default_impl", 00:19:40.228 "params": { 00:19:40.228 "impl_name": "posix" 00:19:40.228 } 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "method": "sock_impl_set_options", 00:19:40.228 "params": { 00:19:40.228 "impl_name": "ssl", 00:19:40.228 "recv_buf_size": 4096, 00:19:40.228 "send_buf_size": 4096, 00:19:40.228 "enable_recv_pipe": true, 00:19:40.228 "enable_quickack": false, 00:19:40.228 "enable_placement_id": 0, 00:19:40.228 "enable_zerocopy_send_server": true, 00:19:40.228 "enable_zerocopy_send_client": false, 00:19:40.228 "zerocopy_threshold": 0, 00:19:40.228 "tls_version": 0, 00:19:40.228 "enable_ktls": false 00:19:40.228 } 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "method": "sock_impl_set_options", 00:19:40.228 "params": { 00:19:40.228 "impl_name": "posix", 00:19:40.228 "recv_buf_size": 2097152, 00:19:40.228 "send_buf_size": 2097152, 00:19:40.228 "enable_recv_pipe": true, 00:19:40.228 "enable_quickack": false, 00:19:40.228 "enable_placement_id": 0, 00:19:40.228 "enable_zerocopy_send_server": true, 00:19:40.228 "enable_zerocopy_send_client": false, 
00:19:40.228 "zerocopy_threshold": 0, 00:19:40.228 "tls_version": 0, 00:19:40.228 "enable_ktls": false 00:19:40.228 } 00:19:40.228 } 00:19:40.228 ] 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "subsystem": "vmd", 00:19:40.228 "config": [] 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "subsystem": "accel", 00:19:40.228 "config": [ 00:19:40.228 { 00:19:40.228 "method": "accel_set_options", 00:19:40.228 "params": { 00:19:40.228 "small_cache_size": 128, 00:19:40.228 "large_cache_size": 16, 00:19:40.228 "task_count": 2048, 00:19:40.228 "sequence_count": 2048, 00:19:40.228 "buf_count": 2048 00:19:40.228 } 00:19:40.228 } 00:19:40.228 ] 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "subsystem": "bdev", 00:19:40.228 "config": [ 00:19:40.228 { 00:19:40.228 "method": "bdev_set_options", 00:19:40.228 "params": { 00:19:40.228 "bdev_io_pool_size": 65535, 00:19:40.228 "bdev_io_cache_size": 256, 00:19:40.228 "bdev_auto_examine": true, 00:19:40.228 "iobuf_small_cache_size": 128, 00:19:40.228 "iobuf_large_cache_size": 16 00:19:40.228 } 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "method": "bdev_raid_set_options", 00:19:40.228 "params": { 00:19:40.228 "process_window_size_kb": 1024 00:19:40.228 } 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "method": "bdev_iscsi_set_options", 00:19:40.228 "params": { 00:19:40.228 "timeout_sec": 30 00:19:40.228 } 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "method": "bdev_nvme_set_options", 00:19:40.228 "params": { 00:19:40.228 "action_on_timeout": "none", 00:19:40.228 "timeout_us": 0, 00:19:40.228 "timeout_admin_us": 0, 00:19:40.228 "keep_alive_timeout_ms": 10000, 00:19:40.228 "arbitration_burst": 0, 00:19:40.228 "low_priority_weight": 0, 00:19:40.228 "medium_priority_weight": 0, 00:19:40.228 "high_priority_weight": 0, 00:19:40.228 "nvme_adminq_poll_period_us": 10000, 00:19:40.228 "nvme_ioq_poll_period_us": 0, 00:19:40.228 "io_queue_requests": 512, 00:19:40.228 "delay_cmd_submit": true, 00:19:40.228 "transport_retry_count": 4, 00:19:40.228 
"bdev_retry_count": 3, 00:19:40.228 "transport_ack_timeout": 0, 00:19:40.228 "ctrlr_loss_timeout_sec": 0, 00:19:40.228 "reconnect_delay_sec": 0, 00:19:40.228 "fast_io_fail_timeout_sec": 0, 00:19:40.228 "disable_auto_failback": false, 00:19:40.228 "generate_uuids": false, 00:19:40.228 "transport_tos": 0, 00:19:40.228 "nvme_error_stat": false, 00:19:40.228 "rdma_srq_size": 0, 00:19:40.228 "io_path_stat": false, 00:19:40.228 "allow_accel_sequence": false, 00:19:40.228 "rdma_max_cq_size": 0, 00:19:40.228 "rdma_cm_event_timeout_ms": 0, 00:19:40.228 "dhchap_digests": [ 00:19:40.228 "sha256", 00:19:40.228 "sha384", 00:19:40.228 "sha512" 00:19:40.228 ], 00:19:40.228 "dhchap_dhgroups": [ 00:19:40.228 "null", 00:19:40.228 "ffdhe2048", 00:19:40.228 "ffdhe3072", 00:19:40.228 "ffdhe4096", 00:19:40.228 "ffdhe6144", 00:19:40.228 "ffdhe8192" 00:19:40.228 ] 00:19:40.228 } 00:19:40.228 }, 00:19:40.228 { 00:19:40.228 "method": "bdev_nvme_attach_controller", 00:19:40.228 "params": { 00:19:40.228 "name": "TLSTEST", 00:19:40.228 "trtype": "TCP", 00:19:40.228 "adrfam": "IPv4", 00:19:40.228 "traddr": "10.0.0.2", 00:19:40.228 "trsvcid": "4420", 00:19:40.228 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.228 "prchk_reftag": false, 00:19:40.228 "prchk_guard": false, 00:19:40.228 "ctrlr_loss_timeout_sec": 0, 00:19:40.228 "reconnect_delay_sec": 0, 00:19:40.228 "fast_io_fail_timeout_sec": 0, 00:19:40.228 "psk": "/tmp/tmp.Uq5xADRNMg", 00:19:40.228 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.228 "hdgst": false, 00:19:40.228 "ddgst": false 00:19:40.228 } 00:19:40.229 }, 00:19:40.229 { 00:19:40.229 "method": "bdev_nvme_set_hotplug", 00:19:40.229 "params": { 00:19:40.229 "period_us": 100000, 00:19:40.229 "enable": false 00:19:40.229 } 00:19:40.229 }, 00:19:40.229 { 00:19:40.229 "method": "bdev_wait_for_examine" 00:19:40.229 } 00:19:40.229 ] 00:19:40.229 }, 00:19:40.229 { 00:19:40.229 "subsystem": "nbd", 00:19:40.229 "config": [] 00:19:40.229 } 00:19:40.229 ] 00:19:40.229 }' 00:19:40.229 
23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:40.229 23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:40.229 [2024-07-15 23:45:29.118950] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:40.229 [2024-07-15 23:45:29.118998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1040275 ] 00:19:40.229 [2024-07-15 23:45:29.167964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.487 [2024-07-15 23:45:29.240502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.487 [2024-07-15 23:45:29.383176] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:40.487 [2024-07-15 23:45:29.383256] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:41.054 23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:41.054 23:45:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:41.054 23:45:29 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:41.054 Running I/O for 10 seconds... 
00:19:53.256 00:19:53.256 Latency(us) 00:19:53.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.256 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:53.256 Verification LBA range: start 0x0 length 0x2000 00:19:53.256 TLSTESTn1 : 10.02 5550.59 21.68 0.00 0.00 23022.43 5812.76 46502.07 00:19:53.256 =================================================================================================================== 00:19:53.256 Total : 5550.59 21.68 0.00 0.00 23022.43 5812.76 46502.07 00:19:53.256 0 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1040275 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1040275 ']' 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1040275 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1040275 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1040275' 00:19:53.256 killing process with pid 1040275 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1040275 00:19:53.256 Received shutdown signal, test time was about 10.000000 seconds 00:19:53.256 00:19:53.256 Latency(us) 00:19:53.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.256 
=================================================================================================================== 00:19:53.256 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.256 [2024-07-15 23:45:40.104714] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1040275 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1040096 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1040096 ']' 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1040096 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1040096 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1040096' 00:19:53.256 killing process with pid 1040096 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1040096 00:19:53.256 [2024-07-15 23:45:40.336882] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1040096 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1042130 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1042130 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1042130 ']' 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.256 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:53.257 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.257 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:53.257 23:45:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.257 [2024-07-15 23:45:40.575158] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:53.257 [2024-07-15 23:45:40.575202] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.257 [2024-07-15 23:45:40.632558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.257 [2024-07-15 23:45:40.712287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.257 [2024-07-15 23:45:40.712326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:53.257 [2024-07-15 23:45:40.712333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.257 [2024-07-15 23:45:40.712340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.257 [2024-07-15 23:45:40.712345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.257 [2024-07-15 23:45:40.712363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Uq5xADRNMg 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Uq5xADRNMg 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.257 [2024-07-15 23:45:41.577288] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 
00:19:53.257 [2024-07-15 23:45:41.910138] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.257 [2024-07-15 23:45:41.910321] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.257 23:45:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:53.257 malloc0 00:19:53.257 23:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:53.566 23:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Uq5xADRNMg 00:19:53.566 [2024-07-15 23:45:42.427651] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:53.566 23:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1042560 00:19:53.566 23:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:53.567 23:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:53.567 23:45:42 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1042560 /var/tmp/bdevperf.sock 00:19:53.567 23:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1042560 ']' 00:19:53.567 23:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.567 23:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:53.567 23:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:19:53.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.567 23:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:53.567 23:45:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.567 [2024-07-15 23:45:42.490527] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:53.567 [2024-07-15 23:45:42.490572] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1042560 ] 00:19:53.824 [2024-07-15 23:45:42.543861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.824 [2024-07-15 23:45:42.617332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.391 23:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:54.391 23:45:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:54.391 23:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Uq5xADRNMg 00:19:54.648 23:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:54.648 [2024-07-15 23:45:43.605179] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.916 nvme0n1 00:19:54.916 23:45:43 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:54.916 Running 
I/O for 1 seconds... 00:19:55.860 00:19:55.860 Latency(us) 00:19:55.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.860 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:55.860 Verification LBA range: start 0x0 length 0x2000 00:19:55.860 nvme0n1 : 1.02 5063.39 19.78 0.00 0.00 25067.83 5926.73 55164.22 00:19:55.860 =================================================================================================================== 00:19:55.860 Total : 5063.39 19.78 0.00 0.00 25067.83 5926.73 55164.22 00:19:55.860 0 00:19:55.860 23:45:44 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1042560 00:19:55.860 23:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1042560 ']' 00:19:55.860 23:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1042560 00:19:55.860 23:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:55.860 23:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:55.860 23:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1042560 00:19:56.118 23:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:56.118 23:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:56.118 23:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1042560' 00:19:56.118 killing process with pid 1042560 00:19:56.118 23:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1042560 00:19:56.118 Received shutdown signal, test time was about 1.000000 seconds 00:19:56.118 00:19:56.118 Latency(us) 00:19:56.118 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.118 =================================================================================================================== 00:19:56.118 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:19:56.118 23:45:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1042560 00:19:56.118 23:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1042130 00:19:56.118 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1042130 ']' 00:19:56.119 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1042130 00:19:56.119 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:19:56.119 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:56.119 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1042130 00:19:56.119 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:19:56.119 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:19:56.119 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1042130' 00:19:56.119 killing process with pid 1042130 00:19:56.119 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1042130 00:19:56.119 [2024-07-15 23:45:45.066953] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:56.119 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1042130 00:19:56.377 23:45:45 nvmf_tcp.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:56.377 23:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:56.377 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:56.377 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.377 23:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:56.377 23:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1043037 
00:19:56.377 23:45:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1043037 00:19:56.377 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1043037 ']' 00:19:56.377 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.377 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:56.378 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.378 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:56.378 23:45:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.378 [2024-07-15 23:45:45.304649] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:56.378 [2024-07-15 23:45:45.304695] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.636 [2024-07-15 23:45:45.361298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.636 [2024-07-15 23:45:45.439243] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.636 [2024-07-15 23:45:45.439279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.636 [2024-07-15 23:45:45.439286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.636 [2024-07-15 23:45:45.439292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:56.636 [2024-07-15 23:45:45.439297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.636 [2024-07-15 23:45:45.439321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.203 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:57.203 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:57.203 23:45:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.203 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.203 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.203 23:45:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.203 23:45:46 nvmf_tcp.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:57.203 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:57.203 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.203 [2024-07-15 23:45:46.149419] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.203 malloc0 00:19:57.462 [2024-07-15 23:45:46.177694] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.462 [2024-07-15 23:45:46.177875] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.462 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:57.462 23:45:46 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1043107 00:19:57.462 23:45:46 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1043107 /var/tmp/bdevperf.sock 00:19:57.462 23:45:46 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:57.462 23:45:46 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1043107 ']' 00:19:57.462 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.462 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:57.462 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.462 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:57.462 23:45:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.462 [2024-07-15 23:45:46.251646] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:57.462 [2024-07-15 23:45:46.251686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043107 ] 00:19:57.462 [2024-07-15 23:45:46.303406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.462 [2024-07-15 23:45:46.379915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.398 23:45:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:58.398 23:45:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:19:58.398 23:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Uq5xADRNMg 00:19:58.398 23:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:58.398 [2024-07-15 23:45:47.367740] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.656 nvme0n1 00:19:58.657 23:45:47 nvmf_tcp.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:58.657 Running I/O for 1 seconds... 00:19:59.591 00:19:59.591 Latency(us) 00:19:59.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.591 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.591 Verification LBA range: start 0x0 length 0x2000 00:19:59.591 nvme0n1 : 1.02 4988.29 19.49 0.00 0.00 25443.23 4815.47 78415.25 00:19:59.591 =================================================================================================================== 00:19:59.591 Total : 4988.29 19.49 0.00 0.00 25443.23 4815.47 78415.25 00:19:59.591 0 00:19:59.849 23:45:48 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:59.849 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:59.849 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.849 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:59.849 23:45:48 nvmf_tcp.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:59.849 "subsystems": [ 00:19:59.849 { 00:19:59.849 "subsystem": "keyring", 00:19:59.849 "config": [ 00:19:59.849 { 00:19:59.849 "method": "keyring_file_add_key", 00:19:59.849 "params": { 00:19:59.850 "name": "key0", 00:19:59.850 "path": "/tmp/tmp.Uq5xADRNMg" 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "iobuf", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "iobuf_set_options", 00:19:59.850 "params": { 00:19:59.850 "small_pool_count": 8192, 00:19:59.850 
"large_pool_count": 1024, 00:19:59.850 "small_bufsize": 8192, 00:19:59.850 "large_bufsize": 135168 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "sock", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "sock_set_default_impl", 00:19:59.850 "params": { 00:19:59.850 "impl_name": "posix" 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "sock_impl_set_options", 00:19:59.850 "params": { 00:19:59.850 "impl_name": "ssl", 00:19:59.850 "recv_buf_size": 4096, 00:19:59.850 "send_buf_size": 4096, 00:19:59.850 "enable_recv_pipe": true, 00:19:59.850 "enable_quickack": false, 00:19:59.850 "enable_placement_id": 0, 00:19:59.850 "enable_zerocopy_send_server": true, 00:19:59.850 "enable_zerocopy_send_client": false, 00:19:59.850 "zerocopy_threshold": 0, 00:19:59.850 "tls_version": 0, 00:19:59.850 "enable_ktls": false 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "sock_impl_set_options", 00:19:59.850 "params": { 00:19:59.850 "impl_name": "posix", 00:19:59.850 "recv_buf_size": 2097152, 00:19:59.850 "send_buf_size": 2097152, 00:19:59.850 "enable_recv_pipe": true, 00:19:59.850 "enable_quickack": false, 00:19:59.850 "enable_placement_id": 0, 00:19:59.850 "enable_zerocopy_send_server": true, 00:19:59.850 "enable_zerocopy_send_client": false, 00:19:59.850 "zerocopy_threshold": 0, 00:19:59.850 "tls_version": 0, 00:19:59.850 "enable_ktls": false 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "vmd", 00:19:59.850 "config": [] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "accel", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "accel_set_options", 00:19:59.850 "params": { 00:19:59.850 "small_cache_size": 128, 00:19:59.850 "large_cache_size": 16, 00:19:59.850 "task_count": 2048, 00:19:59.850 "sequence_count": 2048, 00:19:59.850 "buf_count": 2048 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 
}, 00:19:59.850 { 00:19:59.850 "subsystem": "bdev", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "bdev_set_options", 00:19:59.850 "params": { 00:19:59.850 "bdev_io_pool_size": 65535, 00:19:59.850 "bdev_io_cache_size": 256, 00:19:59.850 "bdev_auto_examine": true, 00:19:59.850 "iobuf_small_cache_size": 128, 00:19:59.850 "iobuf_large_cache_size": 16 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_raid_set_options", 00:19:59.850 "params": { 00:19:59.850 "process_window_size_kb": 1024 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_iscsi_set_options", 00:19:59.850 "params": { 00:19:59.850 "timeout_sec": 30 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_nvme_set_options", 00:19:59.850 "params": { 00:19:59.850 "action_on_timeout": "none", 00:19:59.850 "timeout_us": 0, 00:19:59.850 "timeout_admin_us": 0, 00:19:59.850 "keep_alive_timeout_ms": 10000, 00:19:59.850 "arbitration_burst": 0, 00:19:59.850 "low_priority_weight": 0, 00:19:59.850 "medium_priority_weight": 0, 00:19:59.850 "high_priority_weight": 0, 00:19:59.850 "nvme_adminq_poll_period_us": 10000, 00:19:59.850 "nvme_ioq_poll_period_us": 0, 00:19:59.850 "io_queue_requests": 0, 00:19:59.850 "delay_cmd_submit": true, 00:19:59.850 "transport_retry_count": 4, 00:19:59.850 "bdev_retry_count": 3, 00:19:59.850 "transport_ack_timeout": 0, 00:19:59.850 "ctrlr_loss_timeout_sec": 0, 00:19:59.850 "reconnect_delay_sec": 0, 00:19:59.850 "fast_io_fail_timeout_sec": 0, 00:19:59.850 "disable_auto_failback": false, 00:19:59.850 "generate_uuids": false, 00:19:59.850 "transport_tos": 0, 00:19:59.850 "nvme_error_stat": false, 00:19:59.850 "rdma_srq_size": 0, 00:19:59.850 "io_path_stat": false, 00:19:59.850 "allow_accel_sequence": false, 00:19:59.850 "rdma_max_cq_size": 0, 00:19:59.850 "rdma_cm_event_timeout_ms": 0, 00:19:59.850 "dhchap_digests": [ 00:19:59.850 "sha256", 00:19:59.850 "sha384", 00:19:59.850 "sha512" 00:19:59.850 ], 
00:19:59.850 "dhchap_dhgroups": [ 00:19:59.850 "null", 00:19:59.850 "ffdhe2048", 00:19:59.850 "ffdhe3072", 00:19:59.850 "ffdhe4096", 00:19:59.850 "ffdhe6144", 00:19:59.850 "ffdhe8192" 00:19:59.850 ] 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_nvme_set_hotplug", 00:19:59.850 "params": { 00:19:59.850 "period_us": 100000, 00:19:59.850 "enable": false 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_malloc_create", 00:19:59.850 "params": { 00:19:59.850 "name": "malloc0", 00:19:59.850 "num_blocks": 8192, 00:19:59.850 "block_size": 4096, 00:19:59.850 "physical_block_size": 4096, 00:19:59.850 "uuid": "949af5fa-115e-4fcf-a2c1-669f9000d54f", 00:19:59.850 "optimal_io_boundary": 0 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "bdev_wait_for_examine" 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "nbd", 00:19:59.850 "config": [] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "scheduler", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "framework_set_scheduler", 00:19:59.850 "params": { 00:19:59.850 "name": "static" 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "subsystem": "nvmf", 00:19:59.850 "config": [ 00:19:59.850 { 00:19:59.850 "method": "nvmf_set_config", 00:19:59.850 "params": { 00:19:59.850 "discovery_filter": "match_any", 00:19:59.850 "admin_cmd_passthru": { 00:19:59.850 "identify_ctrlr": false 00:19:59.850 } 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "nvmf_set_max_subsystems", 00:19:59.850 "params": { 00:19:59.850 "max_subsystems": 1024 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "nvmf_set_crdt", 00:19:59.850 "params": { 00:19:59.850 "crdt1": 0, 00:19:59.850 "crdt2": 0, 00:19:59.850 "crdt3": 0 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "nvmf_create_transport", 00:19:59.850 "params": { 00:19:59.850 "trtype": 
"TCP", 00:19:59.850 "max_queue_depth": 128, 00:19:59.850 "max_io_qpairs_per_ctrlr": 127, 00:19:59.850 "in_capsule_data_size": 4096, 00:19:59.850 "max_io_size": 131072, 00:19:59.850 "io_unit_size": 131072, 00:19:59.850 "max_aq_depth": 128, 00:19:59.850 "num_shared_buffers": 511, 00:19:59.850 "buf_cache_size": 4294967295, 00:19:59.850 "dif_insert_or_strip": false, 00:19:59.850 "zcopy": false, 00:19:59.850 "c2h_success": false, 00:19:59.850 "sock_priority": 0, 00:19:59.850 "abort_timeout_sec": 1, 00:19:59.850 "ack_timeout": 0, 00:19:59.850 "data_wr_pool_size": 0 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "nvmf_create_subsystem", 00:19:59.850 "params": { 00:19:59.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.850 "allow_any_host": false, 00:19:59.850 "serial_number": "00000000000000000000", 00:19:59.850 "model_number": "SPDK bdev Controller", 00:19:59.850 "max_namespaces": 32, 00:19:59.850 "min_cntlid": 1, 00:19:59.850 "max_cntlid": 65519, 00:19:59.850 "ana_reporting": false 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "nvmf_subsystem_add_host", 00:19:59.850 "params": { 00:19:59.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.850 "host": "nqn.2016-06.io.spdk:host1", 00:19:59.850 "psk": "key0" 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "nvmf_subsystem_add_ns", 00:19:59.850 "params": { 00:19:59.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.850 "namespace": { 00:19:59.850 "nsid": 1, 00:19:59.850 "bdev_name": "malloc0", 00:19:59.850 "nguid": "949AF5FA115E4FCFA2C1669F9000D54F", 00:19:59.850 "uuid": "949af5fa-115e-4fcf-a2c1-669f9000d54f", 00:19:59.850 "no_auto_visible": false 00:19:59.850 } 00:19:59.850 } 00:19:59.850 }, 00:19:59.850 { 00:19:59.850 "method": "nvmf_subsystem_add_listener", 00:19:59.850 "params": { 00:19:59.850 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.850 "listen_address": { 00:19:59.850 "trtype": "TCP", 00:19:59.850 "adrfam": "IPv4", 00:19:59.850 "traddr": "10.0.0.2", 
00:19:59.850 "trsvcid": "4420" 00:19:59.850 }, 00:19:59.850 "secure_channel": false, 00:19:59.850 "sock_impl": "ssl" 00:19:59.850 } 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 } 00:19:59.850 ] 00:19:59.850 }' 00:19:59.850 23:45:48 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:00.109 23:45:48 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:00.109 "subsystems": [ 00:20:00.109 { 00:20:00.109 "subsystem": "keyring", 00:20:00.109 "config": [ 00:20:00.109 { 00:20:00.109 "method": "keyring_file_add_key", 00:20:00.109 "params": { 00:20:00.109 "name": "key0", 00:20:00.109 "path": "/tmp/tmp.Uq5xADRNMg" 00:20:00.109 } 00:20:00.109 } 00:20:00.109 ] 00:20:00.109 }, 00:20:00.109 { 00:20:00.109 "subsystem": "iobuf", 00:20:00.109 "config": [ 00:20:00.109 { 00:20:00.109 "method": "iobuf_set_options", 00:20:00.109 "params": { 00:20:00.109 "small_pool_count": 8192, 00:20:00.109 "large_pool_count": 1024, 00:20:00.109 "small_bufsize": 8192, 00:20:00.109 "large_bufsize": 135168 00:20:00.109 } 00:20:00.109 } 00:20:00.109 ] 00:20:00.109 }, 00:20:00.109 { 00:20:00.109 "subsystem": "sock", 00:20:00.109 "config": [ 00:20:00.109 { 00:20:00.109 "method": "sock_set_default_impl", 00:20:00.109 "params": { 00:20:00.109 "impl_name": "posix" 00:20:00.109 } 00:20:00.109 }, 00:20:00.109 { 00:20:00.109 "method": "sock_impl_set_options", 00:20:00.109 "params": { 00:20:00.109 "impl_name": "ssl", 00:20:00.109 "recv_buf_size": 4096, 00:20:00.109 "send_buf_size": 4096, 00:20:00.109 "enable_recv_pipe": true, 00:20:00.109 "enable_quickack": false, 00:20:00.109 "enable_placement_id": 0, 00:20:00.109 "enable_zerocopy_send_server": true, 00:20:00.109 "enable_zerocopy_send_client": false, 00:20:00.109 "zerocopy_threshold": 0, 00:20:00.109 "tls_version": 0, 00:20:00.109 "enable_ktls": false 00:20:00.109 } 00:20:00.109 }, 00:20:00.109 { 00:20:00.109 "method": "sock_impl_set_options", 00:20:00.109 
"params": { 00:20:00.109 "impl_name": "posix", 00:20:00.109 "recv_buf_size": 2097152, 00:20:00.109 "send_buf_size": 2097152, 00:20:00.109 "enable_recv_pipe": true, 00:20:00.109 "enable_quickack": false, 00:20:00.109 "enable_placement_id": 0, 00:20:00.109 "enable_zerocopy_send_server": true, 00:20:00.109 "enable_zerocopy_send_client": false, 00:20:00.109 "zerocopy_threshold": 0, 00:20:00.109 "tls_version": 0, 00:20:00.109 "enable_ktls": false 00:20:00.109 } 00:20:00.109 } 00:20:00.109 ] 00:20:00.109 }, 00:20:00.109 { 00:20:00.109 "subsystem": "vmd", 00:20:00.109 "config": [] 00:20:00.109 }, 00:20:00.110 { 00:20:00.110 "subsystem": "accel", 00:20:00.110 "config": [ 00:20:00.110 { 00:20:00.110 "method": "accel_set_options", 00:20:00.110 "params": { 00:20:00.110 "small_cache_size": 128, 00:20:00.110 "large_cache_size": 16, 00:20:00.110 "task_count": 2048, 00:20:00.110 "sequence_count": 2048, 00:20:00.110 "buf_count": 2048 00:20:00.110 } 00:20:00.110 } 00:20:00.110 ] 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "subsystem": "bdev", 00:20:00.110 "config": [ 00:20:00.110 { 00:20:00.110 "method": "bdev_set_options", 00:20:00.110 "params": { 00:20:00.110 "bdev_io_pool_size": 65535, 00:20:00.110 "bdev_io_cache_size": 256, 00:20:00.110 "bdev_auto_examine": true, 00:20:00.110 "iobuf_small_cache_size": 128, 00:20:00.110 "iobuf_large_cache_size": 16 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_raid_set_options", 00:20:00.110 "params": { 00:20:00.110 "process_window_size_kb": 1024 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_iscsi_set_options", 00:20:00.110 "params": { 00:20:00.110 "timeout_sec": 30 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_nvme_set_options", 00:20:00.110 "params": { 00:20:00.110 "action_on_timeout": "none", 00:20:00.110 "timeout_us": 0, 00:20:00.110 "timeout_admin_us": 0, 00:20:00.110 "keep_alive_timeout_ms": 10000, 00:20:00.110 "arbitration_burst": 0, 00:20:00.110 
"low_priority_weight": 0, 00:20:00.110 "medium_priority_weight": 0, 00:20:00.110 "high_priority_weight": 0, 00:20:00.110 "nvme_adminq_poll_period_us": 10000, 00:20:00.110 "nvme_ioq_poll_period_us": 0, 00:20:00.110 "io_queue_requests": 512, 00:20:00.110 "delay_cmd_submit": true, 00:20:00.110 "transport_retry_count": 4, 00:20:00.110 "bdev_retry_count": 3, 00:20:00.110 "transport_ack_timeout": 0, 00:20:00.110 "ctrlr_loss_timeout_sec": 0, 00:20:00.110 "reconnect_delay_sec": 0, 00:20:00.110 "fast_io_fail_timeout_sec": 0, 00:20:00.110 "disable_auto_failback": false, 00:20:00.110 "generate_uuids": false, 00:20:00.110 "transport_tos": 0, 00:20:00.110 "nvme_error_stat": false, 00:20:00.110 "rdma_srq_size": 0, 00:20:00.110 "io_path_stat": false, 00:20:00.110 "allow_accel_sequence": false, 00:20:00.110 "rdma_max_cq_size": 0, 00:20:00.110 "rdma_cm_event_timeout_ms": 0, 00:20:00.110 "dhchap_digests": [ 00:20:00.110 "sha256", 00:20:00.110 "sha384", 00:20:00.110 "sha512" 00:20:00.110 ], 00:20:00.110 "dhchap_dhgroups": [ 00:20:00.110 "null", 00:20:00.110 "ffdhe2048", 00:20:00.110 "ffdhe3072", 00:20:00.110 "ffdhe4096", 00:20:00.110 "ffdhe6144", 00:20:00.110 "ffdhe8192" 00:20:00.110 ] 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_nvme_attach_controller", 00:20:00.110 "params": { 00:20:00.110 "name": "nvme0", 00:20:00.110 "trtype": "TCP", 00:20:00.110 "adrfam": "IPv4", 00:20:00.110 "traddr": "10.0.0.2", 00:20:00.110 "trsvcid": "4420", 00:20:00.110 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.110 "prchk_reftag": false, 00:20:00.110 "prchk_guard": false, 00:20:00.110 "ctrlr_loss_timeout_sec": 0, 00:20:00.110 "reconnect_delay_sec": 0, 00:20:00.110 "fast_io_fail_timeout_sec": 0, 00:20:00.110 "psk": "key0", 00:20:00.110 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.110 "hdgst": false, 00:20:00.110 "ddgst": false 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_nvme_set_hotplug", 00:20:00.110 "params": { 00:20:00.110 
"period_us": 100000, 00:20:00.110 "enable": false 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_enable_histogram", 00:20:00.110 "params": { 00:20:00.110 "name": "nvme0n1", 00:20:00.110 "enable": true 00:20:00.110 } 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "method": "bdev_wait_for_examine" 00:20:00.110 } 00:20:00.110 ] 00:20:00.110 }, 00:20:00.110 { 00:20:00.110 "subsystem": "nbd", 00:20:00.110 "config": [] 00:20:00.110 } 00:20:00.110 ] 00:20:00.110 }' 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- target/tls.sh@268 -- # killprocess 1043107 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1043107 ']' 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1043107 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1043107 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1043107' 00:20:00.110 killing process with pid 1043107 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1043107 00:20:00.110 Received shutdown signal, test time was about 1.000000 seconds 00:20:00.110 00:20:00.110 Latency(us) 00:20:00.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.110 =================================================================================================================== 00:20:00.110 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.110 23:45:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1043107 
00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # killprocess 1043037 00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1043037 ']' 00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1043037 00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1043037 00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1043037' 00:20:00.369 killing process with pid 1043037 00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1043037 00:20:00.369 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1043037 00:20:00.629 23:45:49 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:00.629 23:45:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.629 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:00.629 23:45:49 nvmf_tcp.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:00.629 "subsystems": [ 00:20:00.629 { 00:20:00.629 "subsystem": "keyring", 00:20:00.629 "config": [ 00:20:00.629 { 00:20:00.629 "method": "keyring_file_add_key", 00:20:00.629 "params": { 00:20:00.629 "name": "key0", 00:20:00.629 "path": "/tmp/tmp.Uq5xADRNMg" 00:20:00.629 } 00:20:00.629 } 00:20:00.629 ] 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "subsystem": "iobuf", 00:20:00.629 "config": [ 00:20:00.629 { 00:20:00.629 "method": "iobuf_set_options", 00:20:00.629 "params": { 00:20:00.629 "small_pool_count": 
8192, 00:20:00.629 "large_pool_count": 1024, 00:20:00.629 "small_bufsize": 8192, 00:20:00.629 "large_bufsize": 135168 00:20:00.629 } 00:20:00.629 } 00:20:00.629 ] 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "subsystem": "sock", 00:20:00.629 "config": [ 00:20:00.629 { 00:20:00.629 "method": "sock_set_default_impl", 00:20:00.629 "params": { 00:20:00.629 "impl_name": "posix" 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "sock_impl_set_options", 00:20:00.629 "params": { 00:20:00.629 "impl_name": "ssl", 00:20:00.629 "recv_buf_size": 4096, 00:20:00.629 "send_buf_size": 4096, 00:20:00.629 "enable_recv_pipe": true, 00:20:00.629 "enable_quickack": false, 00:20:00.629 "enable_placement_id": 0, 00:20:00.629 "enable_zerocopy_send_server": true, 00:20:00.629 "enable_zerocopy_send_client": false, 00:20:00.629 "zerocopy_threshold": 0, 00:20:00.629 "tls_version": 0, 00:20:00.629 "enable_ktls": false 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "sock_impl_set_options", 00:20:00.629 "params": { 00:20:00.629 "impl_name": "posix", 00:20:00.629 "recv_buf_size": 2097152, 00:20:00.629 "send_buf_size": 2097152, 00:20:00.629 "enable_recv_pipe": true, 00:20:00.629 "enable_quickack": false, 00:20:00.629 "enable_placement_id": 0, 00:20:00.629 "enable_zerocopy_send_server": true, 00:20:00.629 "enable_zerocopy_send_client": false, 00:20:00.629 "zerocopy_threshold": 0, 00:20:00.629 "tls_version": 0, 00:20:00.629 "enable_ktls": false 00:20:00.629 } 00:20:00.629 } 00:20:00.629 ] 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "subsystem": "vmd", 00:20:00.629 "config": [] 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "subsystem": "accel", 00:20:00.629 "config": [ 00:20:00.629 { 00:20:00.629 "method": "accel_set_options", 00:20:00.629 "params": { 00:20:00.629 "small_cache_size": 128, 00:20:00.629 "large_cache_size": 16, 00:20:00.629 "task_count": 2048, 00:20:00.629 "sequence_count": 2048, 00:20:00.629 "buf_count": 2048 00:20:00.629 } 00:20:00.629 } 
00:20:00.629 ] 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "subsystem": "bdev", 00:20:00.629 "config": [ 00:20:00.629 { 00:20:00.629 "method": "bdev_set_options", 00:20:00.629 "params": { 00:20:00.629 "bdev_io_pool_size": 65535, 00:20:00.629 "bdev_io_cache_size": 256, 00:20:00.629 "bdev_auto_examine": true, 00:20:00.629 "iobuf_small_cache_size": 128, 00:20:00.629 "iobuf_large_cache_size": 16 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "bdev_raid_set_options", 00:20:00.629 "params": { 00:20:00.629 "process_window_size_kb": 1024 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "bdev_iscsi_set_options", 00:20:00.629 "params": { 00:20:00.629 "timeout_sec": 30 00:20:00.629 } 00:20:00.629 }, 00:20:00.629 { 00:20:00.629 "method": "bdev_nvme_set_options", 00:20:00.629 "params": { 00:20:00.629 "action_on_timeout": "none", 00:20:00.629 "timeout_us": 0, 00:20:00.629 "timeout_admin_us": 0, 00:20:00.629 "keep_alive_timeout_ms": 10000, 00:20:00.629 "arbitration_burst": 0, 00:20:00.629 "low_priority_weight": 0, 00:20:00.629 "medium_priority_weight": 0, 00:20:00.629 "high_priority_weight": 0, 00:20:00.629 "nvme_adminq_poll_period_us": 10000, 00:20:00.629 "nvme_ioq_poll_period_us": 0, 00:20:00.629 "io_queue_requests": 0, 00:20:00.629 "delay_cmd_submit": true, 00:20:00.629 "transport_retry_count": 4, 00:20:00.629 "bdev_retry_count": 3, 00:20:00.629 "transport_ack_timeout": 0, 00:20:00.629 "ctrlr_loss_timeout_sec": 0, 00:20:00.629 "reconnect_delay_sec": 0, 00:20:00.629 "fast_io_fail_timeout_sec": 0, 00:20:00.629 "disable_auto_failback": false, 00:20:00.629 "generate_uuids": false, 00:20:00.629 "transport_tos": 0, 00:20:00.629 "nvme_error_stat": false, 00:20:00.629 "rdma_srq_size": 0, 00:20:00.629 "io_path_stat": false, 00:20:00.629 "allow_accel_sequence": false, 00:20:00.629 "rdma_max_cq_size": 0, 00:20:00.629 "rdma_cm_event_timeout_ms": 0, 00:20:00.629 "dhchap_digests": [ 00:20:00.629 "sha256", 00:20:00.630 "sha384", 00:20:00.630 "sha512" 
00:20:00.630 ], 00:20:00.630 "dhchap_dhgroups": [ 00:20:00.630 "null", 00:20:00.630 "ffdhe2048", 00:20:00.630 "ffdhe3072", 00:20:00.630 "ffdhe4096", 00:20:00.630 "ffdhe6144", 00:20:00.630 "ffdhe8192" 00:20:00.630 ] 00:20:00.630 } 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "method": "bdev_nvme_set_hotplug", 00:20:00.630 "params": { 00:20:00.630 "period_us": 100000, 00:20:00.630 "enable": false 00:20:00.630 } 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "method": "bdev_malloc_create", 00:20:00.630 "params": { 00:20:00.630 "name": "malloc0", 00:20:00.630 "num_blocks": 8192, 00:20:00.630 "block_size": 4096, 00:20:00.630 "physical_block_size": 4096, 00:20:00.630 "uuid": "949af5fa-115e-4fcf-a2c1-669f9000d54f", 00:20:00.630 "optimal_io_boundary": 0 00:20:00.630 } 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "method": "bdev_wait_for_examine" 00:20:00.630 } 00:20:00.630 ] 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "subsystem": "nbd", 00:20:00.630 "config": [] 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "subsystem": "scheduler", 00:20:00.630 "config": [ 00:20:00.630 { 00:20:00.630 "method": "framework_set_scheduler", 00:20:00.630 "params": { 00:20:00.630 "name": "static" 00:20:00.630 } 00:20:00.630 } 00:20:00.630 ] 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "subsystem": "nvmf", 00:20:00.630 "config": [ 00:20:00.630 { 00:20:00.630 "method": "nvmf_set_config", 00:20:00.630 "params": { 00:20:00.630 "discovery_filter": "match_any", 00:20:00.630 "admin_cmd_passthru": { 00:20:00.630 "identify_ctrlr": false 00:20:00.630 } 00:20:00.630 } 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "method": "nvmf_set_max_subsystems", 00:20:00.630 "params": { 00:20:00.630 "max_subsystems": 1024 00:20:00.630 } 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "method": "nvmf_set_crdt", 00:20:00.630 "params": { 00:20:00.630 "crdt1": 0, 00:20:00.630 "crdt2": 0, 00:20:00.630 "crdt3": 0 00:20:00.630 } 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "method": "nvmf_create_transport", 00:20:00.630 "params": { 
00:20:00.630 "trtype": "TCP", 00:20:00.630 "max_queue_depth": 128, 00:20:00.630 "max_io_qpairs_per_ctrlr": 127, 00:20:00.630 "in_capsule_data_size": 4096, 00:20:00.630 "max_io_size": 131072, 00:20:00.630 "io_unit_size": 131072, 00:20:00.630 "max_aq_depth": 128, 00:20:00.630 "num_shared_buffers": 511, 00:20:00.630 "buf_cache_size": 4294967295, 00:20:00.630 "dif_insert_or_strip": false, 00:20:00.630 "zcopy": false, 00:20:00.630 "c2h_success": false, 00:20:00.630 "sock_priority": 0, 00:20:00.630 "abort_timeout_sec": 1, 00:20:00.630 "ack_timeout": 0, 00:20:00.630 "data_wr_pool_size": 0 00:20:00.630 } 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "method": "nvmf_create_subsystem", 00:20:00.630 "params": { 00:20:00.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.630 "allow_any_host": false, 00:20:00.630 "serial_number": "00000000000000000000", 00:20:00.630 "model_number": "SPDK bdev Controller", 00:20:00.630 "max_namespaces": 32, 00:20:00.630 "min_cntlid": 1, 00:20:00.630 "max_cntlid": 65519, 00:20:00.630 "ana_reporting": false 00:20:00.630 } 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "method": "nvmf_subsystem_add_host", 00:20:00.630 "params": { 00:20:00.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.630 "host": "nqn.2016-06.io.spdk:host1", 00:20:00.630 "psk": "key0" 00:20:00.630 } 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "method": "nvmf_subsystem_add_ns", 00:20:00.630 "params": { 00:20:00.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.630 "namespace": { 00:20:00.630 "nsid": 1, 00:20:00.630 "bdev_name": "malloc0", 00:20:00.630 "nguid": "949AF5FA115E4FCFA2C1669F9000D54F", 00:20:00.630 "uuid": "949af5fa-115e-4fcf-a2c1-669f9000d54f", 00:20:00.630 "no_auto_visible": false 00:20:00.630 } 00:20:00.630 } 00:20:00.630 }, 00:20:00.630 { 00:20:00.630 "method": "nvmf_subsystem_add_listener", 00:20:00.630 "params": { 00:20:00.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.630 "listen_address": { 00:20:00.630 "trtype": "TCP", 00:20:00.630 "adrfam": "IPv4", 00:20:00.630 
"traddr": "10.0.0.2", 00:20:00.630 "trsvcid": "4420" 00:20:00.630 }, 00:20:00.630 "secure_channel": false, 00:20:00.630 "sock_impl": "ssl" 00:20:00.630 } 00:20:00.630 } 00:20:00.630 ] 00:20:00.630 } 00:20:00.630 ] 00:20:00.630 }' 00:20:00.630 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.630 23:45:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1043708 00:20:00.630 23:45:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1043708 00:20:00.630 23:45:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:00.630 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1043708 ']' 00:20:00.630 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.630 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:00.630 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.630 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:00.630 23:45:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.630 [2024-07-15 23:45:49.450407] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:20:00.630 [2024-07-15 23:45:49.450453] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.630 [2024-07-15 23:45:49.506548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.630 [2024-07-15 23:45:49.584934] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.630 [2024-07-15 23:45:49.584968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.630 [2024-07-15 23:45:49.584975] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.630 [2024-07-15 23:45:49.584981] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.630 [2024-07-15 23:45:49.584986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:00.630 [2024-07-15 23:45:49.585041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.889 [2024-07-15 23:45:49.796160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.889 [2024-07-15 23:45:49.828191] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:00.889 [2024-07-15 23:45:49.839573] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1043832 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1043832 /var/tmp/bdevperf.sock 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@823 -- # '[' -z 1043832 ']' 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:01.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:01.456 23:45:50 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:01.456 "subsystems": [ 00:20:01.456 { 00:20:01.456 "subsystem": "keyring", 00:20:01.456 "config": [ 00:20:01.456 { 00:20:01.456 "method": "keyring_file_add_key", 00:20:01.456 "params": { 00:20:01.456 "name": "key0", 00:20:01.456 "path": "/tmp/tmp.Uq5xADRNMg" 00:20:01.456 } 00:20:01.456 } 00:20:01.456 ] 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "subsystem": "iobuf", 00:20:01.456 "config": [ 00:20:01.456 { 00:20:01.456 "method": "iobuf_set_options", 00:20:01.456 "params": { 00:20:01.456 "small_pool_count": 8192, 00:20:01.456 "large_pool_count": 1024, 00:20:01.456 "small_bufsize": 8192, 00:20:01.456 "large_bufsize": 135168 00:20:01.456 } 00:20:01.456 } 00:20:01.456 ] 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "subsystem": "sock", 00:20:01.456 "config": [ 00:20:01.456 { 00:20:01.456 "method": "sock_set_default_impl", 00:20:01.456 "params": { 00:20:01.456 "impl_name": "posix" 00:20:01.456 } 00:20:01.456 }, 00:20:01.456 { 00:20:01.456 "method": "sock_impl_set_options", 00:20:01.456 "params": { 00:20:01.456 "impl_name": "ssl", 00:20:01.456 "recv_buf_size": 4096, 00:20:01.456 "send_buf_size": 4096, 00:20:01.456 "enable_recv_pipe": true, 00:20:01.456 "enable_quickack": false, 00:20:01.456 "enable_placement_id": 0, 00:20:01.456 "enable_zerocopy_send_server": true, 00:20:01.456 "enable_zerocopy_send_client": false, 00:20:01.456 "zerocopy_threshold": 0, 00:20:01.457 "tls_version": 0, 00:20:01.457 "enable_ktls": false 00:20:01.457 } 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "method": "sock_impl_set_options", 00:20:01.457 "params": { 00:20:01.457 "impl_name": "posix", 00:20:01.457 "recv_buf_size": 2097152, 00:20:01.457 "send_buf_size": 2097152, 00:20:01.457 "enable_recv_pipe": true, 00:20:01.457 "enable_quickack": false, 00:20:01.457 "enable_placement_id": 0, 00:20:01.457 
"enable_zerocopy_send_server": true, 00:20:01.457 "enable_zerocopy_send_client": false, 00:20:01.457 "zerocopy_threshold": 0, 00:20:01.457 "tls_version": 0, 00:20:01.457 "enable_ktls": false 00:20:01.457 } 00:20:01.457 } 00:20:01.457 ] 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "subsystem": "vmd", 00:20:01.457 "config": [] 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "subsystem": "accel", 00:20:01.457 "config": [ 00:20:01.457 { 00:20:01.457 "method": "accel_set_options", 00:20:01.457 "params": { 00:20:01.457 "small_cache_size": 128, 00:20:01.457 "large_cache_size": 16, 00:20:01.457 "task_count": 2048, 00:20:01.457 "sequence_count": 2048, 00:20:01.457 "buf_count": 2048 00:20:01.457 } 00:20:01.457 } 00:20:01.457 ] 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "subsystem": "bdev", 00:20:01.457 "config": [ 00:20:01.457 { 00:20:01.457 "method": "bdev_set_options", 00:20:01.457 "params": { 00:20:01.457 "bdev_io_pool_size": 65535, 00:20:01.457 "bdev_io_cache_size": 256, 00:20:01.457 "bdev_auto_examine": true, 00:20:01.457 "iobuf_small_cache_size": 128, 00:20:01.457 "iobuf_large_cache_size": 16 00:20:01.457 } 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "method": "bdev_raid_set_options", 00:20:01.457 "params": { 00:20:01.457 "process_window_size_kb": 1024 00:20:01.457 } 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "method": "bdev_iscsi_set_options", 00:20:01.457 "params": { 00:20:01.457 "timeout_sec": 30 00:20:01.457 } 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "method": "bdev_nvme_set_options", 00:20:01.457 "params": { 00:20:01.457 "action_on_timeout": "none", 00:20:01.457 "timeout_us": 0, 00:20:01.457 "timeout_admin_us": 0, 00:20:01.457 "keep_alive_timeout_ms": 10000, 00:20:01.457 "arbitration_burst": 0, 00:20:01.457 "low_priority_weight": 0, 00:20:01.457 "medium_priority_weight": 0, 00:20:01.457 "high_priority_weight": 0, 00:20:01.457 "nvme_adminq_poll_period_us": 10000, 00:20:01.457 "nvme_ioq_poll_period_us": 0, 00:20:01.457 "io_queue_requests": 512, 00:20:01.457 
"delay_cmd_submit": true, 00:20:01.457 "transport_retry_count": 4, 00:20:01.457 "bdev_retry_count": 3, 00:20:01.457 "transport_ack_timeout": 0, 00:20:01.457 "ctrlr_loss_timeout_sec": 0, 00:20:01.457 "reconnect_delay_sec": 0, 00:20:01.457 "fast_io_fail_timeout_sec": 0, 00:20:01.457 "disable_auto_failback": false, 00:20:01.457 "generate_uuids": false, 00:20:01.457 "transport_tos": 0, 00:20:01.457 "nvme_error_stat": false, 00:20:01.457 "rdma_srq_size": 0, 00:20:01.457 "io_path_stat": false, 00:20:01.457 "allow_accel_sequence": false, 00:20:01.457 "rdma_max_cq_size": 0, 00:20:01.457 "rdma_cm_event_timeout_ms": 0, 00:20:01.457 "dhchap_digests": [ 00:20:01.457 "sha256", 00:20:01.457 "sha384", 00:20:01.457 "sha512" 00:20:01.457 ], 00:20:01.457 "dhchap_dhgroups": [ 00:20:01.457 "null", 00:20:01.457 "ffdhe2048", 00:20:01.457 "ffdhe3072", 00:20:01.457 "ffdhe4096", 00:20:01.457 "ffdhe6144", 00:20:01.457 "ffdhe8192" 00:20:01.457 ] 00:20:01.457 } 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "method": "bdev_nvme_attach_controller", 00:20:01.457 "params": { 00:20:01.457 "name": "nvme0", 00:20:01.457 "trtype": "TCP", 00:20:01.457 "adrfam": "IPv4", 00:20:01.457 "traddr": "10.0.0.2", 00:20:01.457 "trsvcid": "4420", 00:20:01.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.457 "prchk_reftag": false, 00:20:01.457 "prchk_guard": false, 00:20:01.457 "ctrlr_loss_timeout_sec": 0, 00:20:01.457 "reconnect_delay_sec": 0, 00:20:01.457 "fast_io_fail_timeout_sec": 0, 00:20:01.457 "psk": "key0", 00:20:01.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.457 "hdgst": false, 00:20:01.457 "ddgst": false 00:20:01.457 } 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "method": "bdev_nvme_set_hotplug", 00:20:01.457 "params": { 00:20:01.457 "period_us": 100000, 00:20:01.457 "enable": false 00:20:01.457 } 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "method": "bdev_enable_histogram", 00:20:01.457 "params": { 00:20:01.457 "name": "nvme0n1", 00:20:01.457 "enable": true 00:20:01.457 } 00:20:01.457 }, 
00:20:01.457 { 00:20:01.457 "method": "bdev_wait_for_examine" 00:20:01.457 } 00:20:01.457 ] 00:20:01.457 }, 00:20:01.457 { 00:20:01.457 "subsystem": "nbd", 00:20:01.457 "config": [] 00:20:01.457 } 00:20:01.457 ] 00:20:01.457 }' 00:20:01.457 23:45:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:01.457 23:45:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.457 [2024-07-15 23:45:50.329658] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:20:01.457 [2024-07-15 23:45:50.329707] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1043832 ] 00:20:01.457 [2024-07-15 23:45:50.385278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.716 [2024-07-15 23:45:50.462601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:01.716 [2024-07-15 23:45:50.614584] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.283 23:45:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:02.283 23:45:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # return 0 00:20:02.283 23:45:51 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:02.283 23:45:51 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:02.542 23:45:51 nvmf_tcp.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.542 23:45:51 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:02.542 Running I/O for 1 seconds... 
00:20:03.478 00:20:03.478 Latency(us) 00:20:03.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.478 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:03.478 Verification LBA range: start 0x0 length 0x2000 00:20:03.478 nvme0n1 : 1.02 4657.36 18.19 0.00 0.00 27236.62 5584.81 73400.32 00:20:03.478 =================================================================================================================== 00:20:03.478 Total : 4657.36 18.19 0.00 0.00 27236.62 5584.81 73400.32 00:20:03.478 0 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@800 -- # type=--id 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@801 -- # id=0 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # for n in $shm_files 00:20:03.478 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:03.478 nvmf_trace.0 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@815 -- # return 0 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1043832 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1043832 ']' 
00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1043832 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1043832 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1043832' 00:20:03.737 killing process with pid 1043832 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1043832 00:20:03.737 Received shutdown signal, test time was about 1.000000 seconds 00:20:03.737 00:20:03.737 Latency(us) 00:20:03.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:03.737 =================================================================================================================== 00:20:03.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:03.737 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1043832 00:20:03.996 23:45:52 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:03.996 23:45:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:03.996 23:45:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:03.996 23:45:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:03.996 23:45:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:03.996 23:45:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:03.996 23:45:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:03.996 rmmod nvme_tcp 00:20:03.996 rmmod nvme_fabrics 00:20:03.996 rmmod nvme_keyring 00:20:03.996 23:45:52 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:03.996 23:45:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:03.996 23:45:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1043708 ']' 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1043708 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@942 -- # '[' -z 1043708 ']' 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # kill -0 1043708 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # uname 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1043708 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1043708' 00:20:03.997 killing process with pid 1043708 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@961 -- # kill 1043708 00:20:03.997 23:45:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # wait 1043708 00:20:04.256 23:45:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.256 23:45:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:04.256 23:45:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:04.256 23:45:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.256 23:45:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:04.256 23:45:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:20:04.256 23:45:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.256 23:45:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.157 23:45:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:06.157 23:45:55 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RN3CUi07sT /tmp/tmp.XuPGNy74mx /tmp/tmp.Uq5xADRNMg 00:20:06.157 00:20:06.157 real 1m24.063s 00:20:06.157 user 2m10.150s 00:20:06.157 sys 0m28.131s 00:20:06.157 23:45:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:06.157 23:45:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.157 ************************************ 00:20:06.157 END TEST nvmf_tls 00:20:06.157 ************************************ 00:20:06.157 23:45:55 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:06.157 23:45:55 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.157 23:45:55 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:06.157 23:45:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:06.157 23:45:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:06.415 ************************************ 00:20:06.415 START TEST nvmf_fips 00:20:06.415 ************************************ 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:06.415 * Looking for test storage... 
00:20:06.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:06.415 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:06.416 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # local es=0 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@644 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@630 -- # local arg=openssl 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # type -t openssl 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # type -P openssl 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # arg=/usr/bin/openssl 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@636 -- # [[ -x /usr/bin/openssl ]] 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@645 -- # openssl md5 /dev/fd/62 00:20:06.674 Error setting digest 00:20:06.674 00C26753257F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:06.674 00C26753257F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@645 -- # es=1 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # 
xtrace_disable 00:20:06.674 23:45:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:11.939 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:11.939 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:11.939 Found net devices under 0000:86:00.0: cvl_0_0 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:11.939 Found net devices under 0000:86:00.1: cvl_0_1 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:11.939 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:11.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:20:11.940 00:20:11.940 --- 10.0.0.2 ping statistics --- 00:20:11.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.940 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:11.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:11.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:20:11.940 00:20:11.940 --- 10.0.0.1 ping statistics --- 00:20:11.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.940 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1047717 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1047717 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@823 -- # '[' -z 1047717 ']' 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:11.940 23:46:00 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:11.940 [2024-07-15 23:46:00.746442] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:20:11.940 [2024-07-15 23:46:00.746488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.940 [2024-07-15 23:46:00.805086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.940 [2024-07-15 23:46:00.883527] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.940 [2024-07-15 23:46:00.883562] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.940 [2024-07-15 23:46:00.883569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.940 [2024-07-15 23:46:00.883575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.940 [2024-07-15 23:46:00.883580] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
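The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from a waitforlisten-style poll loop. A condensed sketch of that polling idea, assuming only that readiness is signalled by the socket file appearing (the real helper also issues RPCs; the socket path and retry budget here are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style loop: poll until a UNIX-domain socket
# exists, or give up after a retry budget.
waitforlisten_sketch() {
    local sock=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S "$sock" ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# demo: a background helper creates the socket shortly after we start waiting
sock=$(mktemp -u)
( sleep 0.3
  python3 -c 'import socket,sys; socket.socket(socket.AF_UNIX).bind(sys.argv[1])' "$sock"
) &
waitforlisten_sketch "$sock"
```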
00:20:11.940 [2024-07-15 23:46:00.883597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # return 0 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:12.913 [2024-07-15 23:46:01.727514] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.913 [2024-07-15 23:46:01.743516] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:20:12.913 [2024-07-15 23:46:01.743674] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.913 [2024-07-15 23:46:01.771831] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:12.913 malloc0 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1047882 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1047882 /var/tmp/bdevperf.sock 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@823 -- # '[' -z 1047882 ']' 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:12.913 23:46:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:13.207 [2024-07-15 23:46:01.855272] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:20:13.207 [2024-07-15 23:46:01.855321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047882 ] 00:20:13.207 [2024-07-15 23:46:01.908343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.207 [2024-07-15 23:46:01.985821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.775 23:46:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:13.775 23:46:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # return 0 00:20:13.775 23:46:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:14.033 [2024-07-15 23:46:02.789576] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:14.033 [2024-07-15 23:46:02.789654] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:14.033 TLSTESTn1 00:20:14.033 23:46:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.033 Running I/O for 10 seconds... 
00:20:26.225 00:20:26.225 Latency(us) 00:20:26.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.225 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:26.225 Verification LBA range: start 0x0 length 0x2000 00:20:26.225 TLSTESTn1 : 10.02 5566.66 21.74 0.00 0.00 22955.28 7038.00 51972.90 00:20:26.225 =================================================================================================================== 00:20:26.225 Total : 5566.66 21.74 0.00 0.00 22955.28 7038.00 51972.90 00:20:26.225 0 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@800 -- # type=--id 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@801 -- # id=0 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # for n in $shm_files 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:26.225 nvmf_trace.0 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@815 -- # return 0 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1047882 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@942 -- # '[' -z 1047882 ']' 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # kill 
-0 1047882 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # uname 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1047882 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1047882' 00:20:26.225 killing process with pid 1047882 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@961 -- # kill 1047882 00:20:26.225 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.225 00:20:26.225 Latency(us) 00:20:26.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.225 =================================================================================================================== 00:20:26.225 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.225 [2024-07-15 23:46:13.141222] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # wait 1047882 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:20:26.225 rmmod nvme_tcp 00:20:26.225 rmmod nvme_fabrics 00:20:26.225 rmmod nvme_keyring 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1047717 ']' 00:20:26.225 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1047717 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@942 -- # '[' -z 1047717 ']' 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # kill -0 1047717 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # uname 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1047717 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1047717' 00:20:26.226 killing process with pid 1047717 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@961 -- # kill 1047717 00:20:26.226 [2024-07-15 23:46:13.446197] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # wait 1047717 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:26.226 23:46:13 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.793 23:46:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:26.793 23:46:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:26.793 00:20:26.793 real 0m20.540s 00:20:26.793 user 0m22.863s 00:20:26.793 sys 0m8.477s 00:20:26.793 23:46:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:26.793 23:46:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:26.793 ************************************ 00:20:26.793 END TEST nvmf_fips 00:20:26.793 ************************************ 00:20:26.793 23:46:15 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:20:26.793 23:46:15 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:26.793 23:46:15 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:26.794 23:46:15 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:26.794 23:46:15 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:26.794 23:46:15 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.794 23:46:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.068 23:46:20 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:32.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:32.069 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:32.069 Found net devices under 0000:86:00.0: cvl_0_0 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:32.069 Found net devices under 0000:86:00.1: cvl_0_1 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:32.069 23:46:20 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:32.069 23:46:20 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:32.069 23:46:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:32.069 23:46:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:32.069 ************************************ 00:20:32.069 START TEST nvmf_perf_adq 00:20:32.069 ************************************ 00:20:32.069 23:46:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:32.069 * Looking for test storage... 00:20:32.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:32.069 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.069 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:32.069 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.069 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.328 23:46:21 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.328 23:46:21 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:32.329 23:46:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- 
# x722=() 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:37.611 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:37.611 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:37.611 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:37.612 Found net devices under 0000:86:00.0: cvl_0_0 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:37.612 Found net devices under 0000:86:00.1: cvl_0_1 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:37.612 23:46:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:38.178 23:46:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:40.082 23:46:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:45.361 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:20:45.361 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:45.361 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:45.362 Found net devices under 0000:86:00.0: cvl_0_0 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:45.362 Found net devices under 0000:86:00.1: cvl_0_1 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:45.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:45.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:20:45.362 00:20:45.362 --- 10.0.0.2 ping statistics --- 00:20:45.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.362 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:45.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:45.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:20:45.362 00:20:45.362 --- 10.0.0.1 ping statistics --- 00:20:45.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:45.362 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1057565 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1057565 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@823 -- # '[' -z 1057565 ']' 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@828 -- # local max_retries=100 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:45.362 23:46:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.630 [2024-07-15 23:46:34.371298] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:20:45.630 [2024-07-15 23:46:34.371347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.630 [2024-07-15 23:46:34.429373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:45.630 [2024-07-15 23:46:34.515722] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.630 [2024-07-15 23:46:34.515753] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.630 [2024-07-15 23:46:34.515760] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.630 [2024-07-15 23:46:34.515767] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.630 [2024-07-15 23:46:34.515772] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:45.630 [2024-07-15 23:46:34.515821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.630 [2024-07-15 23:46:34.515918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.630 [2024-07-15 23:46:34.515992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:45.630 [2024-07-15 23:46:34.515993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # return 0 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.261 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 
00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.521 [2024-07-15 23:46:35.356866] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.521 Malloc1 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:46.521 
23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.521 [2024-07-15 23:46:35.408746] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1057818 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:46.521 23:46:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:49.056 23:46:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:49.056 23:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:49.056 23:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:49.056 23:46:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:49.056 23:46:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:49.056 "tick_rate": 2300000000, 00:20:49.056 "poll_groups": [ 00:20:49.056 
{ 00:20:49.056 "name": "nvmf_tgt_poll_group_000", 00:20:49.056 "admin_qpairs": 1, 00:20:49.056 "io_qpairs": 1, 00:20:49.056 "current_admin_qpairs": 1, 00:20:49.056 "current_io_qpairs": 1, 00:20:49.056 "pending_bdev_io": 0, 00:20:49.056 "completed_nvme_io": 20301, 00:20:49.056 "transports": [ 00:20:49.056 { 00:20:49.056 "trtype": "TCP" 00:20:49.056 } 00:20:49.056 ] 00:20:49.056 }, 00:20:49.056 { 00:20:49.056 "name": "nvmf_tgt_poll_group_001", 00:20:49.056 "admin_qpairs": 0, 00:20:49.056 "io_qpairs": 1, 00:20:49.056 "current_admin_qpairs": 0, 00:20:49.056 "current_io_qpairs": 1, 00:20:49.056 "pending_bdev_io": 0, 00:20:49.056 "completed_nvme_io": 20507, 00:20:49.056 "transports": [ 00:20:49.056 { 00:20:49.056 "trtype": "TCP" 00:20:49.056 } 00:20:49.056 ] 00:20:49.056 }, 00:20:49.056 { 00:20:49.056 "name": "nvmf_tgt_poll_group_002", 00:20:49.056 "admin_qpairs": 0, 00:20:49.056 "io_qpairs": 1, 00:20:49.056 "current_admin_qpairs": 0, 00:20:49.056 "current_io_qpairs": 1, 00:20:49.056 "pending_bdev_io": 0, 00:20:49.056 "completed_nvme_io": 20312, 00:20:49.056 "transports": [ 00:20:49.056 { 00:20:49.056 "trtype": "TCP" 00:20:49.056 } 00:20:49.056 ] 00:20:49.056 }, 00:20:49.056 { 00:20:49.056 "name": "nvmf_tgt_poll_group_003", 00:20:49.056 "admin_qpairs": 0, 00:20:49.056 "io_qpairs": 1, 00:20:49.056 "current_admin_qpairs": 0, 00:20:49.056 "current_io_qpairs": 1, 00:20:49.056 "pending_bdev_io": 0, 00:20:49.057 "completed_nvme_io": 20307, 00:20:49.057 "transports": [ 00:20:49.057 { 00:20:49.057 "trtype": "TCP" 00:20:49.057 } 00:20:49.057 ] 00:20:49.057 } 00:20:49.057 ] 00:20:49.057 }' 00:20:49.057 23:46:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:49.057 23:46:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:49.057 23:46:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:49.057 23:46:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 
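The check above pipes `nvmf_get_stats` through `jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'` and counts the matches with `wc -l` to confirm all four poll groups carry an active I/O qpair. A rough stand-alone sketch of the same count, using a trimmed hypothetical stats sample in the same shape and a plain-grep stand-in for the jq `select()` (both are illustrative assumptions, not the script's exact pipeline):

```shell
# Hypothetical trimmed nvmf_get_stats output, same shape as the log above.
stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1 }
  ]
}'
# One matching line per poll group with exactly one active I/O qpair;
# grep -c plays the role of jq select() + wc -l in the test script.
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')
echo "$count"
# The script then fails the run if this count differs from 4.
```

The script compares the count against the number of cores in the `-m 0xF` mask; a mismatch means ADQ steering did not spread the I/O qpairs across all reactors.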
00:20:49.057 23:46:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1057818 00:20:57.178 Initializing NVMe Controllers 00:20:57.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:57.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:57.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:57.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:57.178 Initialization complete. Launching workers. 00:20:57.178 ======================================================== 00:20:57.178 Latency(us) 00:20:57.178 Device Information : IOPS MiB/s Average min max 00:20:57.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10681.40 41.72 5993.64 1522.79 9686.03 00:20:57.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10856.70 42.41 5896.48 2035.96 9803.72 00:20:57.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10727.50 41.90 5966.72 1267.91 10088.13 00:20:57.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10700.90 41.80 5980.93 1501.85 9461.60 00:20:57.178 ======================================================== 00:20:57.178 Total : 42966.49 167.84 5959.20 1267.91 10088.13 00:20:57.178 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 
00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:57.178 rmmod nvme_tcp 00:20:57.178 rmmod nvme_fabrics 00:20:57.178 rmmod nvme_keyring 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1057565 ']' 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1057565 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@942 -- # '[' -z 1057565 ']' 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # kill -0 1057565 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # uname 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1057565 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1057565' 00:20:57.178 killing process with pid 1057565 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@961 -- # kill 1057565 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # wait 1057565 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.178 
23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.178 23:46:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.084 23:46:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:59.084 23:46:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:59.084 23:46:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:00.460 23:46:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:02.362 23:46:51 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:07.634 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:07.634 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:07.634 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.634 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:07.635 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.635 23:46:56 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:07.635 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:21:07.635 Found net devices under 0000:86:00.0: cvl_0_0 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:07.635 Found net devices under 0000:86:00.1: cvl_0_1 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 
00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:07.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:21:07.635 00:21:07.635 --- 10.0.0.2 ping statistics --- 00:21:07.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.635 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:21:07.635 00:21:07.635 --- 10.0.0.1 ping statistics --- 00:21:07.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.635 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:07.635 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 
00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:07.636 net.core.busy_poll = 1 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:07.636 net.core.busy_read = 1 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:07.636 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:07.894 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:07.894 23:46:56 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:07.894 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.894 23:46:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:07.894 23:46:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.894 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1061599 00:21:07.894 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1061599 00:21:07.894 23:46:56 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:07.894 23:46:56 
nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@823 -- # '[' -z 1061599 ']' 00:21:07.895 23:46:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.895 23:46:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:07.895 23:46:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.895 23:46:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:07.895 23:46:56 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.895 [2024-07-15 23:46:56.708836] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:21:07.895 [2024-07-15 23:46:56.708880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.895 [2024-07-15 23:46:56.769961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.895 [2024-07-15 23:46:56.849751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.895 [2024-07-15 23:46:56.849791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.895 [2024-07-15 23:46:56.849798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.895 [2024-07-15 23:46:56.849804] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.895 [2024-07-15 23:46:56.849809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:07.895 [2024-07-15 23:46:56.849857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.895 [2024-07-15 23:46:56.849877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.895 [2024-07-15 23:46:56.849960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.895 [2024-07-15 23:46:56.849962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # return 0 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 
00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.832 [2024-07-15 23:46:57.696097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.832 Malloc1 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:08.832 
23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:08.832 [2024-07-15 23:46:57.740015] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1061853 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:08.832 23:46:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:11.368 23:46:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:11.368 23:46:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:11.368 23:46:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:11.368 23:46:59 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:11.368 23:46:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:11.368 "tick_rate": 2300000000, 00:21:11.368 "poll_groups": [ 00:21:11.368 
{ 00:21:11.368 "name": "nvmf_tgt_poll_group_000", 00:21:11.368 "admin_qpairs": 1, 00:21:11.368 "io_qpairs": 1, 00:21:11.368 "current_admin_qpairs": 1, 00:21:11.368 "current_io_qpairs": 1, 00:21:11.368 "pending_bdev_io": 0, 00:21:11.368 "completed_nvme_io": 28325, 00:21:11.368 "transports": [ 00:21:11.368 { 00:21:11.368 "trtype": "TCP" 00:21:11.368 } 00:21:11.368 ] 00:21:11.368 }, 00:21:11.368 { 00:21:11.368 "name": "nvmf_tgt_poll_group_001", 00:21:11.368 "admin_qpairs": 0, 00:21:11.368 "io_qpairs": 3, 00:21:11.368 "current_admin_qpairs": 0, 00:21:11.368 "current_io_qpairs": 3, 00:21:11.368 "pending_bdev_io": 0, 00:21:11.368 "completed_nvme_io": 31140, 00:21:11.368 "transports": [ 00:21:11.368 { 00:21:11.368 "trtype": "TCP" 00:21:11.368 } 00:21:11.368 ] 00:21:11.368 }, 00:21:11.368 { 00:21:11.368 "name": "nvmf_tgt_poll_group_002", 00:21:11.368 "admin_qpairs": 0, 00:21:11.368 "io_qpairs": 0, 00:21:11.368 "current_admin_qpairs": 0, 00:21:11.368 "current_io_qpairs": 0, 00:21:11.368 "pending_bdev_io": 0, 00:21:11.368 "completed_nvme_io": 0, 00:21:11.368 "transports": [ 00:21:11.368 { 00:21:11.368 "trtype": "TCP" 00:21:11.368 } 00:21:11.368 ] 00:21:11.368 }, 00:21:11.368 { 00:21:11.368 "name": "nvmf_tgt_poll_group_003", 00:21:11.368 "admin_qpairs": 0, 00:21:11.368 "io_qpairs": 0, 00:21:11.368 "current_admin_qpairs": 0, 00:21:11.368 "current_io_qpairs": 0, 00:21:11.368 "pending_bdev_io": 0, 00:21:11.368 "completed_nvme_io": 0, 00:21:11.368 "transports": [ 00:21:11.368 { 00:21:11.368 "trtype": "TCP" 00:21:11.368 } 00:21:11.368 ] 00:21:11.368 } 00:21:11.368 ] 00:21:11.368 }' 00:21:11.368 23:46:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:11.368 23:46:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:11.368 23:46:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:11.368 23:46:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 
00:21:11.368 23:46:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1061853 00:21:19.575 Initializing NVMe Controllers 00:21:19.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:19.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:19.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:19.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:19.575 Initialization complete. Launching workers. 00:21:19.575 ======================================================== 00:21:19.575 Latency(us) 00:21:19.575 Device Information : IOPS MiB/s Average min max 00:21:19.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5356.20 20.92 11956.46 1588.27 59058.77 00:21:19.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5651.90 22.08 11328.75 1757.96 57568.20 00:21:19.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14699.00 57.42 4355.17 1400.32 7157.73 00:21:19.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5061.90 19.77 12650.42 1974.06 57597.99 00:21:19.575 ======================================================== 00:21:19.575 Total : 30768.99 120.19 8324.03 1400.32 59058.77 00:21:19.575 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.575 rmmod nvme_tcp 00:21:19.575 rmmod nvme_fabrics 00:21:19.575 rmmod nvme_keyring 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1061599 ']' 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1061599 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@942 -- # '[' -z 1061599 ']' 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # kill -0 1061599 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # uname 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1061599 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1061599' 00:21:19.575 killing process with pid 1061599 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@961 -- # kill 1061599 00:21:19.575 23:47:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # wait 1061599 00:21:19.575 23:47:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:19.575 23:47:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:19.575 23:47:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
00:21:19.575 23:47:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.575 23:47:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.575 23:47:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.575 23:47:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.575 23:47:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.863 23:47:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:22.863 23:47:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:22.863 00:21:22.863 real 0m50.311s 00:21:22.863 user 2m48.964s 00:21:22.863 sys 0m9.202s 00:21:22.863 23:47:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:22.863 23:47:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:22.863 ************************************ 00:21:22.863 END TEST nvmf_perf_adq 00:21:22.863 ************************************ 00:21:22.863 23:47:11 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:21:22.863 23:47:11 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:22.863 23:47:11 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:22.863 23:47:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:22.863 23:47:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:22.863 ************************************ 00:21:22.863 START TEST nvmf_shutdown 00:21:22.863 ************************************ 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:22.863 * Looking for 
test storage... 00:21:22.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:22.863 23:47:11 
nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:22.863 23:47:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:22.864 ************************************ 00:21:22.864 START TEST nvmf_shutdown_tc1 00:21:22.864 ************************************ 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc1 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.864 23:47:11 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:22.864 23:47:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@298 -- # mlx=() 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:28.141 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:28.141 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.141 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:28.142 Found net devices under 0000:86:00.0: cvl_0_0 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:28.142 23:47:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:28.142 Found net devices under 0000:86:00.1: cvl_0_1 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:28.142 
23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:28.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:28.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:21:28.142 00:21:28.142 --- 10.0.0.2 ping statistics --- 00:21:28.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.142 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:21:28.142 00:21:28.142 --- 10.0.0.1 ping statistics --- 00:21:28.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.142 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
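[Editor's note] The nvmf_tcp_init sequence above moves one port of the NIC into a private network namespace so the SPDK target and the initiator can talk over real hardware on one host. A minimal standalone sketch of that plumbing follows, assuming the two ice ports are named cvl_0_0 and cvl_0_1 as in this run (interface names are environment-specific, and the commands require root):

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator split performed by nvmf_tcp_init above.
# cvl_0_0 moves into the namespace (SPDK target side); cvl_0_1 stays in
# the root namespace (initiator side). Addresses mirror the log: target
# 10.0.0.2, initiator 10.0.0.1.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # assumed name; target-side port
INI_IF=cvl_0_1   # assumed name; initiator-side port

setup_netns() {
    # start from a clean slate on both ports
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    # create the namespace and move the target port into it
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    # address both sides of the link
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    # bring everything up, including loopback inside the namespace
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port (4420) toward the target
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    # sanity-check connectivity in both directions, as the harness does
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
}
```

Once this succeeds, the target binary is simply prefixed with `ip netns exec cvl_0_0_ns_spdk …` (the `NVMF_TARGET_NS_CMD` array in the log) so it binds inside the namespace.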
00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1067069 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1067069 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@823 -- # '[' -z 1067069 ']' 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.142 23:47:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:28.142 [2024-07-15 23:47:16.642205] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:21:28.142 [2024-07-15 23:47:16.642257] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.142 [2024-07-15 23:47:16.701043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:28.142 [2024-07-15 23:47:16.781517] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.142 [2024-07-15 23:47:16.781554] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.142 [2024-07-15 23:47:16.781561] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.142 [2024-07-15 23:47:16.781571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.142 [2024-07-15 23:47:16.781576] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:28.142 [2024-07-15 23:47:16.781624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.142 [2024-07-15 23:47:16.781710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.142 [2024-07-15 23:47:16.781819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.142 [2024-07-15 23:47:16.781820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # return 0 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.711 [2024-07-15 23:47:17.483273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:28.711 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:28.712 
23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:28.712 23:47:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:28.712 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:28.712 Malloc1 00:21:28.712 [2024-07-15 23:47:17.578983] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.712 Malloc2 00:21:28.712 Malloc3 00:21:28.712 Malloc4 00:21:28.973 Malloc5 00:21:28.973 Malloc6 00:21:28.973 Malloc7 00:21:28.973 Malloc8 00:21:28.973 Malloc9 00:21:28.973 Malloc10 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1067347 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1067347 
/var/tmp/bdevperf.sock 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@823 -- # '[' -z 1067347 ']' 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:29.233 23:47:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.233 { 00:21:29.233 "params": { 00:21:29.233 "name": "Nvme$subsystem", 00:21:29.233 "trtype": "$TEST_TRANSPORT", 00:21:29.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.233 "adrfam": "ipv4", 00:21:29.233 "trsvcid": "$NVMF_PORT", 00:21:29.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.233 "hdgst": ${hdgst:-false}, 00:21:29.233 "ddgst": ${ddgst:-false} 00:21:29.233 }, 00:21:29.233 "method": "bdev_nvme_attach_controller" 00:21:29.233 } 00:21:29.233 EOF 00:21:29.233 )") 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.233 { 00:21:29.233 "params": { 00:21:29.233 "name": "Nvme$subsystem", 00:21:29.233 "trtype": "$TEST_TRANSPORT", 00:21:29.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.233 "adrfam": "ipv4", 00:21:29.233 "trsvcid": "$NVMF_PORT", 00:21:29.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.233 "hdgst": ${hdgst:-false}, 00:21:29.233 "ddgst": ${ddgst:-false} 00:21:29.233 
}, 00:21:29.233 "method": "bdev_nvme_attach_controller" 00:21:29.233 } 00:21:29.233 EOF 00:21:29.233 )") 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.233 { 00:21:29.233 "params": { 00:21:29.233 "name": "Nvme$subsystem", 00:21:29.233 "trtype": "$TEST_TRANSPORT", 00:21:29.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.233 "adrfam": "ipv4", 00:21:29.233 "trsvcid": "$NVMF_PORT", 00:21:29.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.233 "hdgst": ${hdgst:-false}, 00:21:29.233 "ddgst": ${ddgst:-false} 00:21:29.233 }, 00:21:29.233 "method": "bdev_nvme_attach_controller" 00:21:29.233 } 00:21:29.233 EOF 00:21:29.233 )") 00:21:29.233 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.234 { 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme$subsystem", 00:21:29.234 "trtype": "$TEST_TRANSPORT", 00:21:29.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "$NVMF_PORT", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.234 "hdgst": ${hdgst:-false}, 00:21:29.234 "ddgst": ${ddgst:-false} 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 } 00:21:29.234 EOF 00:21:29.234 )") 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.234 23:47:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.234 { 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme$subsystem", 00:21:29.234 "trtype": "$TEST_TRANSPORT", 00:21:29.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "$NVMF_PORT", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.234 "hdgst": ${hdgst:-false}, 00:21:29.234 "ddgst": ${ddgst:-false} 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 } 00:21:29.234 EOF 00:21:29.234 )") 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.234 { 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme$subsystem", 00:21:29.234 "trtype": "$TEST_TRANSPORT", 00:21:29.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "$NVMF_PORT", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.234 "hdgst": ${hdgst:-false}, 00:21:29.234 "ddgst": ${ddgst:-false} 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 } 00:21:29.234 EOF 00:21:29.234 )") 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.234 { 00:21:29.234 
"params": { 00:21:29.234 "name": "Nvme$subsystem", 00:21:29.234 "trtype": "$TEST_TRANSPORT", 00:21:29.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "$NVMF_PORT", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.234 "hdgst": ${hdgst:-false}, 00:21:29.234 "ddgst": ${ddgst:-false} 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 } 00:21:29.234 EOF 00:21:29.234 )") 00:21:29.234 [2024-07-15 23:47:18.042002] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:21:29.234 [2024-07-15 23:47:18.042051] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.234 { 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme$subsystem", 00:21:29.234 "trtype": "$TEST_TRANSPORT", 00:21:29.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "$NVMF_PORT", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.234 "hdgst": ${hdgst:-false}, 00:21:29.234 "ddgst": ${ddgst:-false} 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 } 00:21:29.234 EOF 00:21:29.234 )") 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.234 { 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme$subsystem", 00:21:29.234 "trtype": "$TEST_TRANSPORT", 00:21:29.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "$NVMF_PORT", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.234 "hdgst": ${hdgst:-false}, 00:21:29.234 "ddgst": ${ddgst:-false} 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 } 00:21:29.234 EOF 00:21:29.234 )") 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:29.234 { 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme$subsystem", 00:21:29.234 "trtype": "$TEST_TRANSPORT", 00:21:29.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "$NVMF_PORT", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:29.234 "hdgst": ${hdgst:-false}, 00:21:29.234 "ddgst": ${ddgst:-false} 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 } 00:21:29.234 EOF 00:21:29.234 )") 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
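[Editor's note] The gen_nvmf_target_json loop traced above builds one `bdev_nvme_attach_controller` stanza per subsystem with a command-substituted heredoc, joins the fragments with `IFS=,`, and pipes the result through `jq .` for the config printed next. A minimal standalone sketch of that pattern (the variable defaults here are stand-ins for what the real harness exports; `jq` validation is omitted):

```shell
#!/usr/bin/env bash
# Reimplementation sketch of the heredoc-per-subsystem config assembly.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT mimic the harness
# environment seen in the log (tcp, 10.0.0.2, 4420).
set -euo pipefail

TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_target_json() {
    local subsystem
    local config=()
    # one attach-controller stanza per requested subsystem (default: 1)
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join the stanzas with commas, exactly as "IFS=," + ${config[*]} does
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 1 2
```

The joined output is what the log shows being printed with `printf '%s\n'` at 23:47:18 and then fed to bdevperf over `/dev/fd/63`.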
00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:29.234 23:47:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme1", 00:21:29.234 "trtype": "tcp", 00:21:29.234 "traddr": "10.0.0.2", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "4420", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:29.234 "hdgst": false, 00:21:29.234 "ddgst": false 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 },{ 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme2", 00:21:29.234 "trtype": "tcp", 00:21:29.234 "traddr": "10.0.0.2", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "4420", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:29.234 "hdgst": false, 00:21:29.234 "ddgst": false 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 },{ 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme3", 00:21:29.234 "trtype": "tcp", 00:21:29.234 "traddr": "10.0.0.2", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "4420", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:29.234 "hdgst": false, 00:21:29.234 "ddgst": false 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 },{ 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme4", 00:21:29.234 "trtype": "tcp", 00:21:29.234 "traddr": "10.0.0.2", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "4420", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:29.234 "hdgst": false, 00:21:29.234 "ddgst": false 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 },{ 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme5", 00:21:29.234 
"trtype": "tcp", 00:21:29.234 "traddr": "10.0.0.2", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "4420", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:29.234 "hdgst": false, 00:21:29.234 "ddgst": false 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 },{ 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme6", 00:21:29.234 "trtype": "tcp", 00:21:29.234 "traddr": "10.0.0.2", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "4420", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:29.234 "hdgst": false, 00:21:29.234 "ddgst": false 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 },{ 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme7", 00:21:29.234 "trtype": "tcp", 00:21:29.234 "traddr": "10.0.0.2", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "4420", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:29.234 "hdgst": false, 00:21:29.234 "ddgst": false 00:21:29.234 }, 00:21:29.234 "method": "bdev_nvme_attach_controller" 00:21:29.234 },{ 00:21:29.234 "params": { 00:21:29.234 "name": "Nvme8", 00:21:29.234 "trtype": "tcp", 00:21:29.234 "traddr": "10.0.0.2", 00:21:29.234 "adrfam": "ipv4", 00:21:29.234 "trsvcid": "4420", 00:21:29.234 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:29.234 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:29.235 "hdgst": false, 00:21:29.235 "ddgst": false 00:21:29.235 }, 00:21:29.235 "method": "bdev_nvme_attach_controller" 00:21:29.235 },{ 00:21:29.235 "params": { 00:21:29.235 "name": "Nvme9", 00:21:29.235 "trtype": "tcp", 00:21:29.235 "traddr": "10.0.0.2", 00:21:29.235 "adrfam": "ipv4", 00:21:29.235 "trsvcid": "4420", 00:21:29.235 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:29.235 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:29.235 "hdgst": false, 00:21:29.235 "ddgst": 
false 00:21:29.235 }, 00:21:29.235 "method": "bdev_nvme_attach_controller" 00:21:29.235 },{ 00:21:29.235 "params": { 00:21:29.235 "name": "Nvme10", 00:21:29.235 "trtype": "tcp", 00:21:29.235 "traddr": "10.0.0.2", 00:21:29.235 "adrfam": "ipv4", 00:21:29.235 "trsvcid": "4420", 00:21:29.235 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:29.235 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:29.235 "hdgst": false, 00:21:29.235 "ddgst": false 00:21:29.235 }, 00:21:29.235 "method": "bdev_nvme_attach_controller" 00:21:29.235 }' 00:21:29.235 [2024-07-15 23:47:18.099619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.235 [2024-07-15 23:47:18.174624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.614 23:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:30.614 23:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # return 0 00:21:30.614 23:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:30.614 23:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:30.614 23:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:30.614 23:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:30.614 23:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1067347 00:21:30.614 23:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:30.614 23:47:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:31.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1067347 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 
"${num_subsystems[@]}") 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1067069 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.551 { 00:21:31.551 "params": { 00:21:31.551 "name": "Nvme$subsystem", 00:21:31.551 "trtype": "$TEST_TRANSPORT", 00:21:31.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.551 "adrfam": "ipv4", 00:21:31.551 "trsvcid": "$NVMF_PORT", 00:21:31.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.551 "hdgst": ${hdgst:-false}, 00:21:31.551 "ddgst": ${ddgst:-false} 00:21:31.551 }, 00:21:31.551 "method": "bdev_nvme_attach_controller" 00:21:31.551 } 00:21:31.551 EOF 00:21:31.551 )") 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.551 { 00:21:31.551 "params": { 00:21:31.551 "name": "Nvme$subsystem", 00:21:31.551 "trtype": "$TEST_TRANSPORT", 00:21:31.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.551 "adrfam": 
"ipv4", 00:21:31.551 "trsvcid": "$NVMF_PORT", 00:21:31.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.551 "hdgst": ${hdgst:-false}, 00:21:31.551 "ddgst": ${ddgst:-false} 00:21:31.551 }, 00:21:31.551 "method": "bdev_nvme_attach_controller" 00:21:31.551 } 00:21:31.551 EOF 00:21:31.551 )") 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.551 { 00:21:31.551 "params": { 00:21:31.551 "name": "Nvme$subsystem", 00:21:31.551 "trtype": "$TEST_TRANSPORT", 00:21:31.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.551 "adrfam": "ipv4", 00:21:31.551 "trsvcid": "$NVMF_PORT", 00:21:31.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.551 "hdgst": ${hdgst:-false}, 00:21:31.551 "ddgst": ${ddgst:-false} 00:21:31.551 }, 00:21:31.551 "method": "bdev_nvme_attach_controller" 00:21:31.551 } 00:21:31.551 EOF 00:21:31.551 )") 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.551 { 00:21:31.551 "params": { 00:21:31.551 "name": "Nvme$subsystem", 00:21:31.551 "trtype": "$TEST_TRANSPORT", 00:21:31.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.551 "adrfam": "ipv4", 00:21:31.551 "trsvcid": "$NVMF_PORT", 00:21:31.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.551 "hdgst": ${hdgst:-false}, 00:21:31.551 "ddgst": 
${ddgst:-false} 00:21:31.551 }, 00:21:31.551 "method": "bdev_nvme_attach_controller" 00:21:31.551 } 00:21:31.551 EOF 00:21:31.551 )") 00:21:31.551 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:31.811 [2024-07-15 23:47:20.525713] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:21:31.811 [2024-07-15 23:47:20.525761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1067834 ] 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.811 { 00:21:31.811 "params": { 00:21:31.811 "name": "Nvme$subsystem", 00:21:31.811 "trtype": "$TEST_TRANSPORT", 00:21:31.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.811 "adrfam": "ipv4", 00:21:31.811 "trsvcid": "$NVMF_PORT", 00:21:31.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.811 "hdgst": ${hdgst:-false}, 00:21:31.811 "ddgst": ${ddgst:-false} 00:21:31.811 }, 00:21:31.811 "method": "bdev_nvme_attach_controller" 00:21:31.811 } 00:21:31.811 EOF 00:21:31.811 )") 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.811 { 00:21:31.811 "params": { 00:21:31.811 "name": "Nvme$subsystem", 00:21:31.811 "trtype": "$TEST_TRANSPORT", 00:21:31.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.811 "adrfam": "ipv4", 00:21:31.811 "trsvcid": "$NVMF_PORT", 
00:21:31.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.811 "hdgst": ${hdgst:-false}, 00:21:31.811 "ddgst": ${ddgst:-false} 00:21:31.811 }, 00:21:31.811 "method": "bdev_nvme_attach_controller" 00:21:31.811 } 00:21:31.811 EOF 00:21:31.811 )") 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.811 { 00:21:31.811 "params": { 00:21:31.811 "name": "Nvme$subsystem", 00:21:31.811 "trtype": "$TEST_TRANSPORT", 00:21:31.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.811 "adrfam": "ipv4", 00:21:31.811 "trsvcid": "$NVMF_PORT", 00:21:31.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.811 "hdgst": ${hdgst:-false}, 00:21:31.811 "ddgst": ${ddgst:-false} 00:21:31.811 }, 00:21:31.811 "method": "bdev_nvme_attach_controller" 00:21:31.811 } 00:21:31.811 EOF 00:21:31.811 )") 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.811 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.811 { 00:21:31.811 "params": { 00:21:31.811 "name": "Nvme$subsystem", 00:21:31.811 "trtype": "$TEST_TRANSPORT", 00:21:31.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.811 "adrfam": "ipv4", 00:21:31.811 "trsvcid": "$NVMF_PORT", 00:21:31.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.811 "hdgst": ${hdgst:-false}, 00:21:31.811 "ddgst": ${ddgst:-false} 00:21:31.811 }, 00:21:31.812 
"method": "bdev_nvme_attach_controller" 00:21:31.812 } 00:21:31.812 EOF 00:21:31.812 )") 00:21:31.812 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:31.812 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.812 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.812 { 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme$subsystem", 00:21:31.812 "trtype": "$TEST_TRANSPORT", 00:21:31.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "$NVMF_PORT", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.812 "hdgst": ${hdgst:-false}, 00:21:31.812 "ddgst": ${ddgst:-false} 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 } 00:21:31.812 EOF 00:21:31.812 )") 00:21:31.812 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:31.812 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:31.812 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:31.812 { 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme$subsystem", 00:21:31.812 "trtype": "$TEST_TRANSPORT", 00:21:31.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "$NVMF_PORT", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:31.812 "hdgst": ${hdgst:-false}, 00:21:31.812 "ddgst": ${ddgst:-false} 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 } 00:21:31.812 EOF 00:21:31.812 )") 00:21:31.812 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:31.812 23:47:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:31.812 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:31.812 23:47:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme1", 00:21:31.812 "trtype": "tcp", 00:21:31.812 "traddr": "10.0.0.2", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "4420", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:31.812 "hdgst": false, 00:21:31.812 "ddgst": false 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 },{ 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme2", 00:21:31.812 "trtype": "tcp", 00:21:31.812 "traddr": "10.0.0.2", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "4420", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:31.812 "hdgst": false, 00:21:31.812 "ddgst": false 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 },{ 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme3", 00:21:31.812 "trtype": "tcp", 00:21:31.812 "traddr": "10.0.0.2", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "4420", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:31.812 "hdgst": false, 00:21:31.812 "ddgst": false 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 },{ 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme4", 00:21:31.812 "trtype": "tcp", 00:21:31.812 "traddr": "10.0.0.2", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "4420", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:31.812 "hdgst": false, 00:21:31.812 "ddgst": false 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 
},{ 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme5", 00:21:31.812 "trtype": "tcp", 00:21:31.812 "traddr": "10.0.0.2", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "4420", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:31.812 "hdgst": false, 00:21:31.812 "ddgst": false 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 },{ 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme6", 00:21:31.812 "trtype": "tcp", 00:21:31.812 "traddr": "10.0.0.2", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "4420", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:31.812 "hdgst": false, 00:21:31.812 "ddgst": false 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 },{ 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme7", 00:21:31.812 "trtype": "tcp", 00:21:31.812 "traddr": "10.0.0.2", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "4420", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:31.812 "hdgst": false, 00:21:31.812 "ddgst": false 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 },{ 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme8", 00:21:31.812 "trtype": "tcp", 00:21:31.812 "traddr": "10.0.0.2", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "4420", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:31.812 "hdgst": false, 00:21:31.812 "ddgst": false 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 },{ 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme9", 00:21:31.812 "trtype": "tcp", 00:21:31.812 "traddr": "10.0.0.2", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "4420", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:31.812 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:21:31.812 "hdgst": false, 00:21:31.812 "ddgst": false 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 },{ 00:21:31.812 "params": { 00:21:31.812 "name": "Nvme10", 00:21:31.812 "trtype": "tcp", 00:21:31.812 "traddr": "10.0.0.2", 00:21:31.812 "adrfam": "ipv4", 00:21:31.812 "trsvcid": "4420", 00:21:31.812 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:31.812 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:31.812 "hdgst": false, 00:21:31.812 "ddgst": false 00:21:31.812 }, 00:21:31.812 "method": "bdev_nvme_attach_controller" 00:21:31.812 }' 00:21:31.812 [2024-07-15 23:47:20.581565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.812 [2024-07-15 23:47:20.656070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.719 Running I/O for 1 seconds... 00:21:34.675 00:21:34.675 Latency(us) 00:21:34.675 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.675 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.675 Verification LBA range: start 0x0 length 0x400 00:21:34.675 Nvme1n1 : 1.08 238.11 14.88 0.00 0.00 265872.03 19375.86 227039.50 00:21:34.675 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.675 Verification LBA range: start 0x0 length 0x400 00:21:34.675 Nvme2n1 : 1.17 272.52 17.03 0.00 0.00 228353.07 17894.18 213362.42 00:21:34.675 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.675 Verification LBA range: start 0x0 length 0x400 00:21:34.675 Nvme3n1 : 1.18 270.12 16.88 0.00 0.00 226333.92 18464.06 235245.75 00:21:34.675 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.675 Verification LBA range: start 0x0 length 0x400 00:21:34.675 Nvme4n1 : 1.08 237.14 14.82 0.00 0.00 251867.71 18578.03 217921.45 00:21:34.675 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.675 
Verification LBA range: start 0x0 length 0x400 00:21:34.675 Nvme5n1 : 1.19 269.96 16.87 0.00 0.00 218532.11 17894.18 218833.25 00:21:34.676 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.676 Verification LBA range: start 0x0 length 0x400 00:21:34.676 Nvme6n1 : 1.20 267.49 16.72 0.00 0.00 216730.40 18464.06 217921.45 00:21:34.676 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.676 Verification LBA range: start 0x0 length 0x400 00:21:34.676 Nvme7n1 : 1.19 326.58 20.41 0.00 0.00 173727.99 3362.28 213362.42 00:21:34.676 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.676 Verification LBA range: start 0x0 length 0x400 00:21:34.676 Nvme8n1 : 1.18 275.33 17.21 0.00 0.00 201574.38 6240.17 213362.42 00:21:34.676 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.676 Verification LBA range: start 0x0 length 0x400 00:21:34.676 Nvme9n1 : 1.20 270.51 16.91 0.00 0.00 202421.11 3419.27 225215.89 00:21:34.676 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.676 Verification LBA range: start 0x0 length 0x400 00:21:34.676 Nvme10n1 : 1.20 266.53 16.66 0.00 0.00 201744.38 15614.66 242540.19 00:21:34.676 =================================================================================================================== 00:21:34.676 Total : 2694.29 168.39 0.00 0.00 216055.30 3362.28 242540.19 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:34.934 rmmod nvme_tcp 00:21:34.934 rmmod nvme_fabrics 00:21:34.934 rmmod nvme_keyring 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1067069 ']' 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1067069 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@942 -- # '[' -z 1067069 ']' 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # kill -0 1067069 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # uname 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # 
ps --no-headers -o comm= 1067069 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1067069' 00:21:34.934 killing process with pid 1067069 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@961 -- # kill 1067069 00:21:34.934 23:47:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # wait 1067069 00:21:35.502 23:47:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.502 23:47:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:35.502 23:47:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:35.502 23:47:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.502 23:47:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.502 23:47:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.502 23:47:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.502 23:47:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:37.451 00:21:37.451 real 0m14.827s 00:21:37.451 user 0m35.375s 00:21:37.451 sys 0m5.131s 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:37.451 23:47:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:37.451 ************************************ 00:21:37.451 END TEST nvmf_shutdown_tc1 00:21:37.451 ************************************ 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:37.451 ************************************ 00:21:37.451 START TEST nvmf_shutdown_tc2 00:21:37.451 ************************************ 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc2 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.451 
23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:37.451 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:37.451 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.451 23:47:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:37.451 Found net devices under 0000:86:00.0: cvl_0_0 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:37.451 Found net devices under 0000:86:00.1: cvl_0_1 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:37.451 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.452 23:47:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.452 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:37.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:37.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:21:37.710 00:21:37.710 --- 10.0.0.2 ping statistics --- 00:21:37.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.710 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:37.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:21:37.710 00:21:37.710 --- 10.0.0.1 ping statistics --- 00:21:37.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.710 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:37.710 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 
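The `nvmf_tcp_init` steps traced above build the test topology by moving one physical port (`cvl_0_0`, the target side) into a network namespace, addressing both ends, opening TCP port 4420, and ping-testing both directions. A minimal sketch of that sequence follows; the interface names, addresses, and namespace name are taken from the log, while the `run()` indirection is added here purely so the commands can be previewed without root privileges (it is not part of the real `nvmf/common.sh`).

```shell
#!/usr/bin/env bash
# With DRY_RUN set, run() prints each command instead of executing it.
run() { ${DRY_RUN:+echo} "$@"; }

setup_tcp_netns() {
    local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
    # Clear any stale IPv4 configuration on both ports.
    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    # Move the target-side port into the namespace, then address both ends.
    run ip link set "$target_if" netns "$ns"
    run ip addr add 10.0.0.1/24 dev "$initiator_if"
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP traffic (port 4420) in from the initiator side.
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Verify reachability in both directions before the target starts.
    run ping -c 1 10.0.0.2
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Running `DRY_RUN=1 setup_tcp_netns cvl_0_0 cvl_0_1` prints the command sequence corresponding to the trace above without touching the host's network state.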
00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1068962 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1068962 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1068962 ']' 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:37.970 23:47:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.970 [2024-07-15 23:47:26.747753] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:21:37.970 [2024-07-15 23:47:26.747796] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.970 [2024-07-15 23:47:26.806138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:37.970 [2024-07-15 23:47:26.886238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.970 [2024-07-15 23:47:26.886276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.970 [2024-07-15 23:47:26.886284] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.970 [2024-07-15 23:47:26.886291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.970 [2024-07-15 23:47:26.886300] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:37.970 [2024-07-15 23:47:26.886343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.970 [2024-07-15 23:47:26.886363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.970 [2024-07-15 23:47:26.886478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.970 [2024-07-15 23:47:26.886479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # return 0 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.908 [2024-07-15 23:47:27.594082] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:38.908 
23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:38.908 23:47:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:38.908 23:47:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.908 Malloc1 00:21:38.908 [2024-07-15 23:47:27.685798] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.908 Malloc2 00:21:38.908 Malloc3 00:21:38.908 Malloc4 00:21:38.908 Malloc5 00:21:38.908 Malloc6 00:21:39.168 Malloc7 00:21:39.168 Malloc8 00:21:39.168 Malloc9 00:21:39.168 Malloc10 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1069253 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 
1069253 /var/tmp/bdevperf.sock 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1069253 ']' 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.168 { 00:21:39.168 "params": { 00:21:39.168 "name": "Nvme$subsystem", 00:21:39.168 "trtype": "$TEST_TRANSPORT", 00:21:39.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.168 "adrfam": "ipv4", 00:21:39.168 "trsvcid": "$NVMF_PORT", 00:21:39.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.168 "hdgst": ${hdgst:-false}, 00:21:39.168 "ddgst": ${ddgst:-false} 00:21:39.168 }, 00:21:39.168 "method": "bdev_nvme_attach_controller" 00:21:39.168 } 00:21:39.168 EOF 00:21:39.168 )") 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.168 { 00:21:39.168 "params": { 00:21:39.168 "name": "Nvme$subsystem", 00:21:39.168 "trtype": "$TEST_TRANSPORT", 00:21:39.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.168 "adrfam": "ipv4", 00:21:39.168 "trsvcid": "$NVMF_PORT", 00:21:39.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.168 "hdgst": ${hdgst:-false}, 00:21:39.168 "ddgst": ${ddgst:-false} 00:21:39.168 
}, 00:21:39.168 "method": "bdev_nvme_attach_controller" 00:21:39.168 } 00:21:39.168 EOF 00:21:39.168 )") 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.168 { 00:21:39.168 "params": { 00:21:39.168 "name": "Nvme$subsystem", 00:21:39.168 "trtype": "$TEST_TRANSPORT", 00:21:39.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.168 "adrfam": "ipv4", 00:21:39.168 "trsvcid": "$NVMF_PORT", 00:21:39.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.168 "hdgst": ${hdgst:-false}, 00:21:39.168 "ddgst": ${ddgst:-false} 00:21:39.168 }, 00:21:39.168 "method": "bdev_nvme_attach_controller" 00:21:39.168 } 00:21:39.168 EOF 00:21:39.168 )") 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.168 { 00:21:39.168 "params": { 00:21:39.168 "name": "Nvme$subsystem", 00:21:39.168 "trtype": "$TEST_TRANSPORT", 00:21:39.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.168 "adrfam": "ipv4", 00:21:39.168 "trsvcid": "$NVMF_PORT", 00:21:39.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.168 "hdgst": ${hdgst:-false}, 00:21:39.168 "ddgst": ${ddgst:-false} 00:21:39.168 }, 00:21:39.168 "method": "bdev_nvme_attach_controller" 00:21:39.168 } 00:21:39.168 EOF 00:21:39.168 )") 00:21:39.168 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:39.168 23:47:28 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.169 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.169 { 00:21:39.169 "params": { 00:21:39.169 "name": "Nvme$subsystem", 00:21:39.169 "trtype": "$TEST_TRANSPORT", 00:21:39.169 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.169 "adrfam": "ipv4", 00:21:39.169 "trsvcid": "$NVMF_PORT", 00:21:39.169 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.169 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.169 "hdgst": ${hdgst:-false}, 00:21:39.169 "ddgst": ${ddgst:-false} 00:21:39.169 }, 00:21:39.169 "method": "bdev_nvme_attach_controller" 00:21:39.169 } 00:21:39.169 EOF 00:21:39.169 )") 00:21:39.169 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.428 { 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme$subsystem", 00:21:39.428 "trtype": "$TEST_TRANSPORT", 00:21:39.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "$NVMF_PORT", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.428 "hdgst": ${hdgst:-false}, 00:21:39.428 "ddgst": ${ddgst:-false} 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 } 00:21:39.428 EOF 00:21:39.428 )") 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.428 { 00:21:39.428 
"params": { 00:21:39.428 "name": "Nvme$subsystem", 00:21:39.428 "trtype": "$TEST_TRANSPORT", 00:21:39.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "$NVMF_PORT", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.428 "hdgst": ${hdgst:-false}, 00:21:39.428 "ddgst": ${ddgst:-false} 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 } 00:21:39.428 EOF 00:21:39.428 )") 00:21:39.428 [2024-07-15 23:47:28.152126] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:21:39.428 [2024-07-15 23:47:28.152174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1069253 ] 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.428 { 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme$subsystem", 00:21:39.428 "trtype": "$TEST_TRANSPORT", 00:21:39.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "$NVMF_PORT", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.428 "hdgst": ${hdgst:-false}, 00:21:39.428 "ddgst": ${ddgst:-false} 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 } 00:21:39.428 EOF 00:21:39.428 )") 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.428 { 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme$subsystem", 00:21:39.428 "trtype": "$TEST_TRANSPORT", 00:21:39.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "$NVMF_PORT", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.428 "hdgst": ${hdgst:-false}, 00:21:39.428 "ddgst": ${ddgst:-false} 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 } 00:21:39.428 EOF 00:21:39.428 )") 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.428 { 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme$subsystem", 00:21:39.428 "trtype": "$TEST_TRANSPORT", 00:21:39.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "$NVMF_PORT", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.428 "hdgst": ${hdgst:-false}, 00:21:39.428 "ddgst": ${ddgst:-false} 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 } 00:21:39.428 EOF 00:21:39.428 )") 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
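The repeated `config+=("$(cat <<-EOF ...)")` fragments above are `gen_nvmf_target_json` expanding one here-doc template per subsystem into a bash array, then joining the fragments with `IFS=,` before piping the result through `jq .`. A simplified, self-contained sketch of that pattern is below; the `gen_target_json` name and the enclosing `[...]` wrapper are illustrative choices for this sketch (the real helper in `nvmf/common.sh` substitutes environment variables such as `$TEST_TRANSPORT` and `$NVMF_FIRST_TARGET_IP` and emits the comma-joined fragments for bdevperf's `--json` input).

```shell
#!/usr/bin/env bash
# Build one JSON parameter block per requested subsystem number and
# join them into a single JSON array, mirroring the heredoc-per-subsystem
# pattern traced in the log above.
gen_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the fragments with commas, exactly as "${config[*]}" with IFS=,
    # does in the real helper.
    local IFS=,
    printf '[%s]\n' "${config[*]}"
}
```

For example, `gen_target_json 1 2 3` emits three `bdev_nvme_attach_controller` parameter sets for `cnode1`..`cnode3`, matching the expanded output printed at `nvmf/common.sh@558` below.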
00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:39.428 23:47:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme1", 00:21:39.428 "trtype": "tcp", 00:21:39.428 "traddr": "10.0.0.2", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "4420", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.428 "hdgst": false, 00:21:39.428 "ddgst": false 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 },{ 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme2", 00:21:39.428 "trtype": "tcp", 00:21:39.428 "traddr": "10.0.0.2", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "4420", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:39.428 "hdgst": false, 00:21:39.428 "ddgst": false 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 },{ 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme3", 00:21:39.428 "trtype": "tcp", 00:21:39.428 "traddr": "10.0.0.2", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "4420", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:39.428 "hdgst": false, 00:21:39.428 "ddgst": false 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 },{ 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme4", 00:21:39.428 "trtype": "tcp", 00:21:39.428 "traddr": "10.0.0.2", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "4420", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:39.428 "hdgst": false, 00:21:39.428 "ddgst": false 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 },{ 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme5", 00:21:39.428 
"trtype": "tcp", 00:21:39.428 "traddr": "10.0.0.2", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "4420", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:39.428 "hdgst": false, 00:21:39.428 "ddgst": false 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 },{ 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme6", 00:21:39.428 "trtype": "tcp", 00:21:39.428 "traddr": "10.0.0.2", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "4420", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:39.428 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:39.428 "hdgst": false, 00:21:39.428 "ddgst": false 00:21:39.428 }, 00:21:39.428 "method": "bdev_nvme_attach_controller" 00:21:39.428 },{ 00:21:39.428 "params": { 00:21:39.428 "name": "Nvme7", 00:21:39.428 "trtype": "tcp", 00:21:39.428 "traddr": "10.0.0.2", 00:21:39.428 "adrfam": "ipv4", 00:21:39.428 "trsvcid": "4420", 00:21:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:39.429 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:39.429 "hdgst": false, 00:21:39.429 "ddgst": false 00:21:39.429 }, 00:21:39.429 "method": "bdev_nvme_attach_controller" 00:21:39.429 },{ 00:21:39.429 "params": { 00:21:39.429 "name": "Nvme8", 00:21:39.429 "trtype": "tcp", 00:21:39.429 "traddr": "10.0.0.2", 00:21:39.429 "adrfam": "ipv4", 00:21:39.429 "trsvcid": "4420", 00:21:39.429 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:39.429 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:39.429 "hdgst": false, 00:21:39.429 "ddgst": false 00:21:39.429 }, 00:21:39.429 "method": "bdev_nvme_attach_controller" 00:21:39.429 },{ 00:21:39.429 "params": { 00:21:39.429 "name": "Nvme9", 00:21:39.429 "trtype": "tcp", 00:21:39.429 "traddr": "10.0.0.2", 00:21:39.429 "adrfam": "ipv4", 00:21:39.429 "trsvcid": "4420", 00:21:39.429 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:39.429 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:39.429 "hdgst": false, 00:21:39.429 "ddgst": 
false 00:21:39.429 }, 00:21:39.429 "method": "bdev_nvme_attach_controller" 00:21:39.429 },{ 00:21:39.429 "params": { 00:21:39.429 "name": "Nvme10", 00:21:39.429 "trtype": "tcp", 00:21:39.429 "traddr": "10.0.0.2", 00:21:39.429 "adrfam": "ipv4", 00:21:39.429 "trsvcid": "4420", 00:21:39.429 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:39.429 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:39.429 "hdgst": false, 00:21:39.429 "ddgst": false 00:21:39.429 }, 00:21:39.429 "method": "bdev_nvme_attach_controller" 00:21:39.429 }' 00:21:39.429 [2024-07-15 23:47:28.208513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.429 [2024-07-15 23:47:28.281542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.805 Running I/O for 10 seconds... 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # return 0 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:40.805 23:47:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:40.805 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:41.062 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:41.062 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:41.062 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:41.062 23:47:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:41.320 23:47:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1069253 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@942 -- # '[' -z 1069253 ']' 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # kill -0 1069253 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # uname 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1069253 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1069253' 00:21:41.320 killing process with pid 1069253 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # kill 1069253 00:21:41.320 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # wait 1069253 
00:21:41.320 Received shutdown signal, test time was about 0.653939 seconds 00:21:41.320 00:21:41.320 Latency(us) 00:21:41.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.320 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.320 Verification LBA range: start 0x0 length 0x400 00:21:41.320 Nvme1n1 : 0.61 314.90 19.68 0.00 0.00 199953.36 16412.49 210627.01 00:21:41.320 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.320 Verification LBA range: start 0x0 length 0x400 00:21:41.320 Nvme2n1 : 0.62 311.27 19.45 0.00 0.00 196747.20 18122.13 206979.78 00:21:41.320 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.320 Verification LBA range: start 0x0 length 0x400 00:21:41.320 Nvme3n1 : 0.65 293.91 18.37 0.00 0.00 190847.18 16526.47 215186.03 00:21:41.320 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.320 Verification LBA range: start 0x0 length 0x400 00:21:41.320 Nvme4n1 : 0.61 317.04 19.82 0.00 0.00 182621.35 15272.74 213362.42 00:21:41.320 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.320 Verification LBA range: start 0x0 length 0x400 00:21:41.320 Nvme5n1 : 0.62 310.92 19.43 0.00 0.00 181150.05 33736.79 194214.51 00:21:41.320 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.320 Verification LBA range: start 0x0 length 0x400 00:21:41.320 Nvme6n1 : 0.59 217.76 13.61 0.00 0.00 247625.91 19603.81 218833.25 00:21:41.320 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.320 Verification LBA range: start 0x0 length 0x400 00:21:41.320 Nvme7n1 : 0.59 218.55 13.66 0.00 0.00 240459.24 15614.66 196949.93 00:21:41.320 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.320 Verification LBA range: start 0x0 length 0x400 00:21:41.320 Nvme8n1 : 0.62 309.14 19.32 0.00 0.00 166757.06 
16070.57 198773.54 00:21:41.320 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.320 Verification LBA range: start 0x0 length 0x400 00:21:41.320 Nvme9n1 : 0.60 214.58 13.41 0.00 0.00 230327.87 35788.35 223392.28 00:21:41.320 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:41.320 Verification LBA range: start 0x0 length 0x400 00:21:41.320 Nvme10n1 : 0.60 212.73 13.30 0.00 0.00 224865.50 19489.84 242540.19 00:21:41.320 =================================================================================================================== 00:21:41.320 Total : 2720.80 170.05 0.00 0.00 201568.68 15272.74 242540.19 00:21:41.578 23:47:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:42.513 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1068962 00:21:42.513 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:42.513 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:42.513 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:42.513 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 
00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.772 rmmod nvme_tcp 00:21:42.772 rmmod nvme_fabrics 00:21:42.772 rmmod nvme_keyring 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1068962 ']' 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1068962 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@942 -- # '[' -z 1068962 ']' 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # kill -0 1068962 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # uname 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1068962 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1068962' 00:21:42.772 killing process with pid 1068962 00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # kill 1068962 
00:21:42.772 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # wait 1068962 00:21:43.031 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.031 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.031 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.031 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.031 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.031 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.031 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.031 23:47:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.571 00:21:45.571 real 0m7.649s 00:21:45.571 user 0m22.583s 00:21:45.571 sys 0m1.251s 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:45.571 ************************************ 00:21:45.571 END TEST nvmf_shutdown_tc2 00:21:45.571 ************************************ 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown -- 
common/autotest_common.sh@1099 -- # xtrace_disable 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:45.571 ************************************ 00:21:45.571 START TEST nvmf_shutdown_tc3 00:21:45.571 ************************************ 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc3 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 
00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:45.571 
Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:45.571 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- 
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:45.571 Found net devices under 0000:86:00.0: cvl_0_0 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.571 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:45.572 Found net devices under 0000:86:00.1: cvl_0_1 00:21:45.572 
23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:45.572 23:47:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:45.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:21:45.572 00:21:45.572 --- 10.0.0.2 ping statistics --- 00:21:45.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.572 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:21:45.572 00:21:45.572 --- 10.0.0.1 ping statistics --- 00:21:45.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.572 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1070393 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1070393 00:21:45.572 23:47:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@823 -- # '[' -z 1070393 ']' 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:45.572 23:47:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.572 [2024-07-15 23:47:34.465822] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:21:45.572 [2024-07-15 23:47:34.465866] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.572 [2024-07-15 23:47:34.521390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.831 [2024-07-15 23:47:34.594174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.831 [2024-07-15 23:47:34.594214] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:45.831 [2024-07-15 23:47:34.594220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.831 [2024-07-15 23:47:34.594228] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.831 [2024-07-15 23:47:34.594233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.831 [2024-07-15 23:47:34.594345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.831 [2024-07-15 23:47:34.594433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.831 [2024-07-15 23:47:34.594518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.831 [2024-07-15 23:47:34.594519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # return 0 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.399 [2024-07-15 23:47:35.309134] tcp.c: 672:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for 
i in "${num_subsystems[@]}" 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:46.399 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.658 Malloc1 00:21:46.658 [2024-07-15 23:47:35.400805] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.658 Malloc2 00:21:46.658 Malloc3 00:21:46.658 Malloc4 00:21:46.658 Malloc5 00:21:46.658 Malloc6 00:21:46.658 Malloc7 00:21:46.919 Malloc8 00:21:46.919 Malloc9 00:21:46.919 Malloc10 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1070676 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1070676 /var/tmp/bdevperf.sock 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@823 -- # '[' -z 1070676 ']' 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
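The `waitforlisten` step above blocks until bdevperf's RPC socket `/var/tmp/bdevperf.sock` exists before any `rpc_cmd -s ...` calls are issued against it. A minimal sketch of such a wait loop (the name `wait_for_socket`, the retry count, and the 0.1 s interval are illustrative, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Poll until a path exists and is a UNIX domain socket, or give up
# after a bounded number of retries.
wait_for_socket() {
    local sock=$1
    local max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        if [[ -S $sock ]]; then   # -S: path exists and is a socket
            return 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

The real helper additionally verifies that the process it launched is still alive while it waits, so a crashed server fails fast instead of burning the full retry budget.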
00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.919 { 00:21:46.919 "params": { 00:21:46.919 "name": "Nvme$subsystem", 00:21:46.919 "trtype": "$TEST_TRANSPORT", 00:21:46.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.919 "adrfam": "ipv4", 00:21:46.919 "trsvcid": "$NVMF_PORT", 00:21:46.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.919 "hdgst": ${hdgst:-false}, 00:21:46.919 "ddgst": ${ddgst:-false} 00:21:46.919 }, 00:21:46.919 "method": "bdev_nvme_attach_controller" 00:21:46.919 } 00:21:46.919 EOF 00:21:46.919 )") 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.919 { 00:21:46.919 "params": { 00:21:46.919 "name": "Nvme$subsystem", 00:21:46.919 "trtype": "$TEST_TRANSPORT", 00:21:46.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.919 "adrfam": "ipv4", 00:21:46.919 "trsvcid": "$NVMF_PORT", 00:21:46.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.919 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.919 "hdgst": ${hdgst:-false}, 00:21:46.919 "ddgst": ${ddgst:-false} 00:21:46.919 }, 00:21:46.919 "method": "bdev_nvme_attach_controller" 00:21:46.919 } 00:21:46.919 EOF 00:21:46.919 )") 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.919 { 00:21:46.919 "params": { 00:21:46.919 "name": "Nvme$subsystem", 00:21:46.919 "trtype": "$TEST_TRANSPORT", 00:21:46.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.919 "adrfam": "ipv4", 00:21:46.919 "trsvcid": "$NVMF_PORT", 00:21:46.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.919 "hdgst": ${hdgst:-false}, 00:21:46.919 "ddgst": ${ddgst:-false} 00:21:46.919 }, 00:21:46.919 "method": "bdev_nvme_attach_controller" 00:21:46.919 } 00:21:46.919 EOF 00:21:46.919 )") 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.919 { 00:21:46.919 "params": { 00:21:46.919 "name": "Nvme$subsystem", 00:21:46.919 "trtype": "$TEST_TRANSPORT", 00:21:46.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.919 "adrfam": "ipv4", 00:21:46.919 "trsvcid": "$NVMF_PORT", 00:21:46.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.919 "hdgst": ${hdgst:-false}, 00:21:46.919 "ddgst": ${ddgst:-false} 00:21:46.919 }, 00:21:46.919 "method": "bdev_nvme_attach_controller" 00:21:46.919 } 00:21:46.919 EOF 
00:21:46.919 )") 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.919 { 00:21:46.919 "params": { 00:21:46.919 "name": "Nvme$subsystem", 00:21:46.919 "trtype": "$TEST_TRANSPORT", 00:21:46.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.919 "adrfam": "ipv4", 00:21:46.919 "trsvcid": "$NVMF_PORT", 00:21:46.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.919 "hdgst": ${hdgst:-false}, 00:21:46.919 "ddgst": ${ddgst:-false} 00:21:46.919 }, 00:21:46.919 "method": "bdev_nvme_attach_controller" 00:21:46.919 } 00:21:46.919 EOF 00:21:46.919 )") 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.919 { 00:21:46.919 "params": { 00:21:46.919 "name": "Nvme$subsystem", 00:21:46.919 "trtype": "$TEST_TRANSPORT", 00:21:46.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.919 "adrfam": "ipv4", 00:21:46.919 "trsvcid": "$NVMF_PORT", 00:21:46.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.919 "hdgst": ${hdgst:-false}, 00:21:46.919 "ddgst": ${ddgst:-false} 00:21:46.919 }, 00:21:46.919 "method": "bdev_nvme_attach_controller" 00:21:46.919 } 00:21:46.919 EOF 00:21:46.919 )") 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.919 { 00:21:46.919 "params": { 00:21:46.919 "name": "Nvme$subsystem", 00:21:46.919 "trtype": "$TEST_TRANSPORT", 00:21:46.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.919 "adrfam": "ipv4", 00:21:46.919 "trsvcid": "$NVMF_PORT", 00:21:46.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.919 "hdgst": ${hdgst:-false}, 00:21:46.919 "ddgst": ${ddgst:-false} 00:21:46.919 }, 00:21:46.919 "method": "bdev_nvme_attach_controller" 00:21:46.919 } 00:21:46.919 EOF 00:21:46.919 )") 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:46.919 [2024-07-15 23:47:35.872047] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:21:46.919 [2024-07-15 23:47:35.872095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070676 ] 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.919 { 00:21:46.919 "params": { 00:21:46.919 "name": "Nvme$subsystem", 00:21:46.919 "trtype": "$TEST_TRANSPORT", 00:21:46.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.919 "adrfam": "ipv4", 00:21:46.919 "trsvcid": "$NVMF_PORT", 00:21:46.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.919 "hdgst": ${hdgst:-false}, 00:21:46.919 "ddgst": ${ddgst:-false} 00:21:46.919 }, 00:21:46.919 "method": "bdev_nvme_attach_controller" 00:21:46.919 } 00:21:46.919 EOF 00:21:46.919 )") 00:21:46.919 23:47:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.919 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.919 { 00:21:46.919 "params": { 00:21:46.919 "name": "Nvme$subsystem", 00:21:46.919 "trtype": "$TEST_TRANSPORT", 00:21:46.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.919 "adrfam": "ipv4", 00:21:46.919 "trsvcid": "$NVMF_PORT", 00:21:46.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.919 "hdgst": ${hdgst:-false}, 00:21:46.919 "ddgst": ${ddgst:-false} 00:21:46.919 }, 00:21:46.919 "method": "bdev_nvme_attach_controller" 00:21:46.919 } 00:21:46.920 EOF 00:21:46.920 )") 00:21:46.920 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:46.920 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:46.920 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:46.920 { 00:21:46.920 "params": { 00:21:46.920 "name": "Nvme$subsystem", 00:21:46.920 "trtype": "$TEST_TRANSPORT", 00:21:46.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.920 "adrfam": "ipv4", 00:21:46.920 "trsvcid": "$NVMF_PORT", 00:21:46.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.920 "hdgst": ${hdgst:-false}, 00:21:46.920 "ddgst": ${ddgst:-false} 00:21:46.920 }, 00:21:46.920 "method": "bdev_nvme_attach_controller" 00:21:46.920 } 00:21:46.920 EOF 00:21:46.920 )") 00:21:46.920 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:47.179 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
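The `gen_nvmf_target_json` trace above shows the pattern used to build the `--json /dev/fd/63` config: emit one `bdev_nvme_attach_controller` stanza per subsystem from a here-document into a bash array, then join the array with `IFS=,` before feeding it to `jq`. A condensed, standalone sketch of that pattern for two subsystems (the transport and address values are hard-coded placeholders here, not read from the test environment):

```shell
#!/usr/bin/env bash
# Build one attach-controller stanza per subsystem, one heredoc per
# iteration, then join them with commas into a single JSON document.
config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# "${config[*]}" joins the array elements with the first character of IFS.
IFS=,
printf '%s\n' "[${config[*]}]"
```

Wrapping the comma-joined stanzas in `[...]` as above makes the output a valid JSON array on its own; the traced helper instead relies on `jq .` downstream to consume the stream.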
00:21:47.179 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:47.179 23:47:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:47.179 "params": { 00:21:47.179 "name": "Nvme1", 00:21:47.179 "trtype": "tcp", 00:21:47.179 "traddr": "10.0.0.2", 00:21:47.179 "adrfam": "ipv4", 00:21:47.179 "trsvcid": "4420", 00:21:47.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.179 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.179 "hdgst": false, 00:21:47.179 "ddgst": false 00:21:47.179 }, 00:21:47.179 "method": "bdev_nvme_attach_controller" 00:21:47.179 },{ 00:21:47.179 "params": { 00:21:47.179 "name": "Nvme2", 00:21:47.179 "trtype": "tcp", 00:21:47.179 "traddr": "10.0.0.2", 00:21:47.179 "adrfam": "ipv4", 00:21:47.179 "trsvcid": "4420", 00:21:47.179 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:47.179 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:47.179 "hdgst": false, 00:21:47.179 "ddgst": false 00:21:47.179 }, 00:21:47.179 "method": "bdev_nvme_attach_controller" 00:21:47.179 },{ 00:21:47.179 "params": { 00:21:47.179 "name": "Nvme3", 00:21:47.179 "trtype": "tcp", 00:21:47.179 "traddr": "10.0.0.2", 00:21:47.179 "adrfam": "ipv4", 00:21:47.179 "trsvcid": "4420", 00:21:47.179 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:47.179 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:47.179 "hdgst": false, 00:21:47.179 "ddgst": false 00:21:47.179 }, 00:21:47.179 "method": "bdev_nvme_attach_controller" 00:21:47.180 },{ 00:21:47.180 "params": { 00:21:47.180 "name": "Nvme4", 00:21:47.180 "trtype": "tcp", 00:21:47.180 "traddr": "10.0.0.2", 00:21:47.180 "adrfam": "ipv4", 00:21:47.180 "trsvcid": "4420", 00:21:47.180 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:47.180 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:47.180 "hdgst": false, 00:21:47.180 "ddgst": false 00:21:47.180 }, 00:21:47.180 "method": "bdev_nvme_attach_controller" 00:21:47.180 },{ 00:21:47.180 "params": { 00:21:47.180 "name": "Nvme5", 00:21:47.180 
"trtype": "tcp", 00:21:47.180 "traddr": "10.0.0.2", 00:21:47.180 "adrfam": "ipv4", 00:21:47.180 "trsvcid": "4420", 00:21:47.180 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:47.180 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:47.180 "hdgst": false, 00:21:47.180 "ddgst": false 00:21:47.180 }, 00:21:47.180 "method": "bdev_nvme_attach_controller" 00:21:47.180 },{ 00:21:47.180 "params": { 00:21:47.180 "name": "Nvme6", 00:21:47.180 "trtype": "tcp", 00:21:47.180 "traddr": "10.0.0.2", 00:21:47.180 "adrfam": "ipv4", 00:21:47.180 "trsvcid": "4420", 00:21:47.180 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:47.180 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:47.180 "hdgst": false, 00:21:47.180 "ddgst": false 00:21:47.180 }, 00:21:47.180 "method": "bdev_nvme_attach_controller" 00:21:47.180 },{ 00:21:47.180 "params": { 00:21:47.180 "name": "Nvme7", 00:21:47.180 "trtype": "tcp", 00:21:47.180 "traddr": "10.0.0.2", 00:21:47.180 "adrfam": "ipv4", 00:21:47.180 "trsvcid": "4420", 00:21:47.180 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:47.180 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:47.180 "hdgst": false, 00:21:47.180 "ddgst": false 00:21:47.180 }, 00:21:47.180 "method": "bdev_nvme_attach_controller" 00:21:47.180 },{ 00:21:47.180 "params": { 00:21:47.180 "name": "Nvme8", 00:21:47.180 "trtype": "tcp", 00:21:47.180 "traddr": "10.0.0.2", 00:21:47.180 "adrfam": "ipv4", 00:21:47.180 "trsvcid": "4420", 00:21:47.180 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:47.180 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:47.180 "hdgst": false, 00:21:47.180 "ddgst": false 00:21:47.180 }, 00:21:47.180 "method": "bdev_nvme_attach_controller" 00:21:47.180 },{ 00:21:47.180 "params": { 00:21:47.180 "name": "Nvme9", 00:21:47.180 "trtype": "tcp", 00:21:47.180 "traddr": "10.0.0.2", 00:21:47.180 "adrfam": "ipv4", 00:21:47.180 "trsvcid": "4420", 00:21:47.180 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:47.180 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:47.180 "hdgst": false, 00:21:47.180 "ddgst": 
false 00:21:47.180 }, 00:21:47.180 "method": "bdev_nvme_attach_controller" 00:21:47.180 },{ 00:21:47.180 "params": { 00:21:47.180 "name": "Nvme10", 00:21:47.180 "trtype": "tcp", 00:21:47.180 "traddr": "10.0.0.2", 00:21:47.180 "adrfam": "ipv4", 00:21:47.180 "trsvcid": "4420", 00:21:47.180 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:47.180 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:47.180 "hdgst": false, 00:21:47.180 "ddgst": false 00:21:47.180 }, 00:21:47.180 "method": "bdev_nvme_attach_controller" 00:21:47.180 }' 00:21:47.180 [2024-07-15 23:47:35.927387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.180 [2024-07-15 23:47:36.000065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.559 Running I/O for 10 seconds... 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # return 0 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:48.559 23:47:37 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:48.559 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:48.560 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:48.560 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:48.560 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:48.560 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:48.560 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:48.560 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:48.560 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:48.560 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:48.560 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:48.560 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:48.819 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:48.819 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:48.819 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:48.819 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:48.819 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:21:48.819 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:49.079 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:49.079 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:49.079 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:49.079 23:47:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:49.355 23:47:38 
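The `waitforio` loop traced above (shutdown.sh `@57`–`@69`) polls `bdev_get_iostat` up to ten times, 0.25 s apart, until `Nvme1n1` reports at least 100 completed reads; here the counter climbs 3 → 67 → 195 before the loop breaks. A generic sketch of that bounded-poll loop, with a hypothetical `get_io_count` standing in for the `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'` pipeline:

```shell
#!/usr/bin/env bash
# Poll a counter until it crosses a threshold, giving up after 10 tries.
# get_io_count is a stand-in the caller must define (e.g. an RPC + jq query).
waitforio() {
    local threshold=$1
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(get_io_count)
        if [ "$count" -ge "$threshold" ]; then
            ret=0    # enough I/O observed; the workload is making progress
            break
        fi
        sleep 0.25
    done
    return $ret
}
```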
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1070393 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@942 -- # '[' -z 1070393 ']' 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # kill -0 1070393 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # uname 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1070393 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1070393' 00:21:49.355 killing process with pid 1070393 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@961 -- # kill 1070393 00:21:49.355 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # wait 1070393 00:21:49.355 [2024-07-15 23:47:38.180871] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59ad0 is same with the state(5) to be set 00:21:49.355 [2024-07-15 23:47:38.180942] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59ad0 is same with the state(5) to be set 00:21:49.355 [2024-07-15 23:47:38.180951] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59ad0 is same with the state(5) to be set 00:21:49.355 [2024-07-15 23:47:38.180959] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59ad0 is same with the 
state(5) to be set 00:21:49.355 [2024-07-15 23:47:38.182666] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182774]
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182780] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182786] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182792] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182798] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182805] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182811] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182817] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182824] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182831] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182841] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182847] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182853] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182860] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182867] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182873] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182879] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182885] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182892] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182899] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182906] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.356 [2024-07-15 23:47:38.182913] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182919] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182925] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182931] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182937] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182943] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182949] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182955] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182962] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182968] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182974] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182981] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182987] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182993] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.182998] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.183005] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.183016] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.183022] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.183028] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.183034] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.183040] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.183046] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.183052] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3dd70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.184155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.357 [2024-07-15 23:47:38.184187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.357 [2024-07-15 23:47:38.184197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.357 [2024-07-15 23:47:38.184204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.357 [2024-07-15 
23:47:38.184212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.357 [2024-07-15 23:47:38.184219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.357 [2024-07-15 23:47:38.184232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.357 [2024-07-15 23:47:38.184238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.357 [2024-07-15 23:47:38.184245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb18c70 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191135] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191162] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191170] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191179] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191186] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191193] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191200] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 
[2024-07-15 23:47:38.191207] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191214] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191220] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191231] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191242] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191248] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191255] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191261] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191268] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191274] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191280] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191286] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191292] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191298] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191304] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191310] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191317] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191322] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191329] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191335] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191341] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191347] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191353] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191359] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191365] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191371] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191377] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191383] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191389] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191396] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191402] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191410] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191416] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191422] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191429] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191436] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191441] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.357 [2024-07-15 23:47:38.191447] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191453] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191459] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191465] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191471] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191477] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191483] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191489] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191495] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191502] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191507] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191514] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191520] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191526] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191531] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191537] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191543] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191548] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.191554] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a410 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192624] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192649] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192660] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192667] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192673] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192679] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192686] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192692] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192698] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192704] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192711] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192718] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192724] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192730] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192748] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192754] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192765] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192771] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192782] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192788] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192794] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192800] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192806] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192812] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192819] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192826] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192832] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192839] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192846] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192852] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192858] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192865] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192871] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192878] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192884] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192889] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set 00:21:49.358 [2024-07-15 23:47:38.192895] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5a8d0 is same with the state(5) to be set
00:21:49.358 [2024-07-15 23:47:38.194161] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5b230 is same with the state(5) to be set
00:21:49.359 [2024-07-15 23:47:38.195570] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5b6d0 is same with the state(5) to be set
00:21:49.360 [2024-07-15 23:47:38.196744] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5bb90 is same with the state(5) to be set
00:21:49.361 [2024-07-15 23:47:38.197695] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3d8d0 is same with the state(5) to be set
00:21:49.361 [2024-07-15 23:47:38.201574] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce40d0 is same with the state(5) to be set 00:21:49.361 [2024-07-15 23:47:38.201683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3b190 is same with the state(5) to be set 00:21:49.361 [2024-07-15 23:47:38.201766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x667340 is same with the state(5) to be set 00:21:49.361 [2024-07-15 23:47:38.201844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb551d0 is same with the state(5) to be set 00:21:49.361 [2024-07-15 23:47:38.201922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.201978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.201984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5cb30 is same with the state(5) to be set 00:21:49.361 [2024-07-15 23:47:38.202007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.202015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.202023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.202030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.202037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.202044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.202051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.202058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.202064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccd8b0 is same with the state(5) to be set 00:21:49.361 [2024-07-15 23:47:38.202088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.202096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.361 [2024-07-15 23:47:38.202103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.361 [2024-07-15 23:47:38.202110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.362 [2024-07-15 23:47:38.202124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.362 [2024-07-15 23:47:38.202138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced050 is same with the state(5) to be set 00:21:49.362 [2024-07-15 23:47:38.202168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.362 [2024-07-15 23:47:38.202176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.362 [2024-07-15 23:47:38.202191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.362 [2024-07-15 23:47:38.202207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202215] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.362 [2024-07-15 23:47:38.202221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce48d0 is same with the state(5) to be set 00:21:49.362 [2024-07-15 23:47:38.202248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb18c70 (9): Bad file descriptor 00:21:49.362 [2024-07-15 23:47:38.202272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.362 [2024-07-15 23:47:38.202280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.362 [2024-07-15 23:47:38.202294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.362 [2024-07-15 23:47:38.202308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.362 [2024-07-15 23:47:38.202322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5fbf0 is same with the state(5) to be set 00:21:49.362 [2024-07-15 23:47:38.202382] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:49.362 [2024-07-15 23:47:38.202450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 
23:47:38.202701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.362 [2024-07-15 23:47:38.202907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.362 [2024-07-15 23:47:38.202914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.202922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.202928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.202936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.202943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.202950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.202957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.202965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.202971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.202979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.202985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.202994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 
23:47:38.203030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203113] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 
[2024-07-15 23:47:38.203286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.203401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.203464] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbe0490 was disconnected and freed. reset controller. 
00:21:49.363 [2024-07-15 23:47:38.219027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.219070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.219086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.219094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.219104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.219110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.219119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.219131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.219140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.219147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.219155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.219162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.219170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.219176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.219184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.219191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.219199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.363 [2024-07-15 23:47:38.219205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.363 [2024-07-15 23:47:38.219213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 
[2024-07-15 23:47:38.219415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.364 [2024-07-15 23:47:38.219730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.364 [2024-07-15 23:47:38.219736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 
[2024-07-15 23:47:38.219744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.219993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.219999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.220007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.220014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.220992] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc42ef0 was disconnected and freed. reset controller. 00:21:49.365 [2024-07-15 23:47:38.221058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce40d0 (9): Bad file descriptor 00:21:49.365 [2024-07-15 23:47:38.221075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3b190 (9): Bad file descriptor 00:21:49.365 [2024-07-15 23:47:38.221091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x667340 (9): Bad file descriptor 00:21:49.365 [2024-07-15 23:47:38.221107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb551d0 (9): Bad file descriptor 00:21:49.365 [2024-07-15 23:47:38.221121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5cb30 (9): Bad file descriptor 00:21:49.365 [2024-07-15 23:47:38.221135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccd8b0 (9): Bad file descriptor 00:21:49.365 [2024-07-15 23:47:38.221146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xced050 (9): Bad file 
descriptor 00:21:49.365 [2024-07-15 23:47:38.221159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce48d0 (9): Bad file descriptor 00:21:49.365 [2024-07-15 23:47:38.221175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5fbf0 (9): Bad file descriptor 00:21:49.365 [2024-07-15 23:47:38.221199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.365 [2024-07-15 23:47:38.221448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.365 [2024-07-15 23:47:38.221485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.365 [2024-07-15 23:47:38.221493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221527] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 
23:47:38.221775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221854] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.221985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.221993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.222001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.222007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.222015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 
[2024-07-15 23:47:38.222022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.222029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.222036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.222044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.222050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.222058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.222065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.222073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.222079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.222089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.222095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.222102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.366 [2024-07-15 23:47:38.222110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.366 [2024-07-15 23:47:38.222118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.222124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.222132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.222138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.222146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc59ea0 is same with the state(5) to be set 00:21:49.367 [2024-07-15 23:47:38.222214] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc59ea0 was disconnected and freed. reset controller. 00:21:49.367 [2024-07-15 23:47:38.222221] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:49.367 [2024-07-15 23:47:38.223343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223441] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223691] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 23:47:38.223927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.367 [2024-07-15 23:47:38.223935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.367 [2024-07-15 
23:47:38.223942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.223950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.223956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.223964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.223971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.223979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.223986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.223994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224024] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 
[2024-07-15 23:47:38.224191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224367] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb14040 was disconnected and freed. reset controller. 00:21:49.368 [2024-07-15 23:47:38.224561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.368 [2024-07-15 23:47:38.224707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.368 [2024-07-15 23:47:38.224743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.368 [2024-07-15 23:47:38.224752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224786] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.224988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.224996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 
[2024-07-15 23:47:38.225037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 [2024-07-15 23:47:38.225367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.369 
[2024-07-15 23:47:38.225381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.369 [2024-07-15 23:47:38.225387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.370 [2024-07-15 23:47:38.225397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.370 [2024-07-15 23:47:38.225403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.370 [2024-07-15 23:47:38.225411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.370 [2024-07-15 23:47:38.225418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.370 [2024-07-15 23:47:38.225426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.370 [2024-07-15 23:47:38.225432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.370 [2024-07-15 23:47:38.225439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.370 [2024-07-15 23:47:38.225446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.370 [2024-07-15 23:47:38.225454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.370 [2024-07-15 23:47:38.225460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.370 [2024-07-15 23:47:38.225468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.370 [2024-07-15 23:47:38.225475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.370 [2024-07-15 23:47:38.225482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.370 [2024-07-15 23:47:38.225489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.370 [2024-07-15 23:47:38.225496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.370 [2024-07-15 23:47:38.225503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.370 [2024-07-15 23:47:38.225511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.370 [2024-07-15 23:47:38.225518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.370 [2024-07-15 23:47:38.225582] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc41a60 was disconnected and freed. reset controller. 
00:21:49.370 [2024-07-15 23:47:38.226493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:21:49.370 [2024-07-15 23:47:38.229514] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:49.370 [2024-07-15 23:47:38.229805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.370 [2024-07-15 23:47:38.229821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce48d0 with addr=10.0.0.2, port=4420
00:21:49.370 [2024-07-15 23:47:38.229829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce48d0 is same with the state(5) to be set
00:21:49.370 [2024-07-15 23:47:38.230550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:49.370 [2024-07-15 23:47:38.230572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:49.370 [2024-07-15 23:47:38.230581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:49.370 [2024-07-15 23:47:38.230762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.370 [2024-07-15 23:47:38.230775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb18c70 with addr=10.0.0.2, port=4420
00:21:49.370 [2024-07-15 23:47:38.230783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb18c70 is same with the state(5) to be set
00:21:49.370 [2024-07-15 23:47:38.230794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce48d0 (9): Bad file descriptor
00:21:49.370 [2024-07-15 23:47:38.231077] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:49.370 [2024-07-15 23:47:38.231125] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:49.370 [2024-07-15 23:47:38.231445] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:49.370 [2024-07-15 23:47:38.231494] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:49.370 [2024-07-15 23:47:38.231934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.370 [2024-07-15 23:47:38.231947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xccd8b0 with addr=10.0.0.2, port=4420
00:21:49.370 [2024-07-15 23:47:38.231955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccd8b0 is same with the state(5) to be set
00:21:49.370 [2024-07-15 23:47:38.232161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.370 [2024-07-15 23:47:38.232170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x667340 with addr=10.0.0.2, port=4420
00:21:49.370 [2024-07-15 23:47:38.232177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x667340 is same with the state(5) to be set
00:21:49.370 [2024-07-15 23:47:38.232369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.370 [2024-07-15 23:47:38.232378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce40d0 with addr=10.0.0.2, port=4420
00:21:49.370 [2024-07-15 23:47:38.232385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce40d0 is same with the state(5) to be set
00:21:49.370 [2024-07-15 23:47:38.232396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb18c70 (9): Bad file descriptor
00:21:49.370 [2024-07-15 23:47:38.232404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:49.370 [2024-07-15 23:47:38.232411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:49.370 [2024-07-15 23:47:38.232419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:49.370 [2024-07-15 23:47:38.232546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:49.370 [2024-07-15 23:47:38.232573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccd8b0 (9): Bad file descriptor
00:21:49.370 [2024-07-15 23:47:38.232582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x667340 (9): Bad file descriptor
00:21:49.370 [2024-07-15 23:47:38.232590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce40d0 (9): Bad file descriptor
00:21:49.370 [2024-07-15 23:47:38.232598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:49.370 [2024-07-15 23:47:38.232604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:49.370 [2024-07-15 23:47:38.232610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:49.370 [2024-07-15 23:47:38.232663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.370 [2024-07-15 23:47:38.232929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.370 [2024-07-15 23:47:38.232935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.232943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.232949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.232958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.232964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.232973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.232979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.232987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.232993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.371 [2024-07-15 23:47:38.233563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.371 [2024-07-15 23:47:38.233569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.233577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.233583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.233591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.233597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.233605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.233612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.233620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbe1920 is same with the state(5) to be set
00:21:49.372 [2024-07-15 23:47:38.234641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.234992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.234999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.235007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.235015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.235023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.235030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.235038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.235044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.235052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.235059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.235066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.235073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.235081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.235088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.235096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.235102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.235110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.235116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.235124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.372 [2024-07-15 23:47:38.235130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.372 [2024-07-15 23:47:38.235139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.373 [2024-07-15 23:47:38.235145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.373 [2024-07-15 23:47:38.235153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.373 [2024-07-15 23:47:38.235160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.373 [2024-07-15 23:47:38.235168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.373 [2024-07-15 23:47:38.235174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.373 [2024-07-15 23:47:38.235182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.373 [2024-07-15 23:47:38.235189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.373 [2024-07-15 23:47:38.235198] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 
[2024-07-15 23:47:38.235371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.235593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.235600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb12b70 is same with the state(5) to be set 00:21:49.373 [2024-07-15 23:47:38.236591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.373 [2024-07-15 23:47:38.236612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.236627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.236641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.236656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.236671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.236686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.236700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.236714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.236729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.236749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.373 [2024-07-15 23:47:38.236765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.373 [2024-07-15 23:47:38.236771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:49.374 [2024-07-15 23:47:38.236869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236951] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.236988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.236994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 
23:47:38.237199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237295] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.374 [2024-07-15 23:47:38.237407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.374 [2024-07-15 23:47:38.237416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.375 [2024-07-15 23:47:38.237422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.375 [2024-07-15 23:47:38.237431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.375 [2024-07-15 23:47:38.237438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.375 [2024-07-15 23:47:38.237446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.375 [2024-07-15 23:47:38.237452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.375 [2024-07-15 23:47:38.237461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.375 
[2024-07-15 23:47:38.237467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.375 [2024-07-15 23:47:38.237475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.375 [2024-07-15 23:47:38.237482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.375 [2024-07-15 23:47:38.237490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.375 [2024-07-15 23:47:38.237497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.375 [2024-07-15 23:47:38.237506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.375 [2024-07-15 23:47:38.237513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.375 [2024-07-15 23:47:38.237521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.375 [2024-07-15 23:47:38.237527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.375 [2024-07-15 23:47:38.237536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.375 [2024-07-15 23:47:38.237543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.375 [2024-07-15 23:47:38.237551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.375 [2024-07-15 23:47:38.237557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.376 [2024-07-15 23:47:38.239532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5bde0 is same with the state(5) to be set 00:21:49.376 [2024-07-15 23:47:38.240536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.376 [2024-07-15 23:47:38.240547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.376 [2024-07-15 23:47:38.240557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.376 [2024-07-15 23:47:38.240564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.376 [2024-07-15 23:47:38.240573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.376 [2024-07-15 23:47:38.240579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.376 [2024-07-15 23:47:38.240588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.376 [2024-07-15 23:47:38.240595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.376 [2024-07-15 23:47:38.240603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.376 [2024-07-15 23:47:38.240610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.376 [2024-07-15 23:47:38.240618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.376 [2024-07-15 23:47:38.240625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.377 [2024-07-15 23:47:38.240822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.377 [2024-07-15 23:47:38.240830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:21:49.377 [2024-07-15 23:47:38.240838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.377 [2024-07-15 23:47:38.240846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... 42 further identical READ / ABORTED - SQ DELETION (00/08) pairs for cid:21-62 (lba:27264-32512, step 128) elided ...]
00:21:49.378 [2024-07-15 23:47:38.241484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:49.378 [2024-07-15 23:47:38.241491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.378 [2024-07-15 23:47:38.241498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5d2b0 is same with the state(5) to be set
00:21:49.378 [2024-07-15 23:47:38.242757] 
bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.378 [2024-07-15 23:47:38.242772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:49.378 [2024-07-15 23:47:38.242782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:49.378 [2024-07-15 23:47:38.242792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:49.378 [2024-07-15 23:47:38.242822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:49.378 [2024-07-15 23:47:38.242829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:49.378 [2024-07-15 23:47:38.242837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:49.378 [2024-07-15 23:47:38.242847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:49.378 [2024-07-15 23:47:38.242853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:49.378 [2024-07-15 23:47:38.242859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:49.378 [2024-07-15 23:47:38.242870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:49.378 [2024-07-15 23:47:38.242875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:49.378 [2024-07-15 23:47:38.242884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:21:49.378 [2024-07-15 23:47:38.242914] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:49.378 [2024-07-15 23:47:38.242925] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:49.378 [2024-07-15 23:47:38.242934] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:49.378 [2024-07-15 23:47:38.242947] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:49.378 [2024-07-15 23:47:38.242959] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:49.378 [2024-07-15 23:47:38.243010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:49.378 task offset: 28160 on job bdev=Nvme2n1 fails
00:21:49.378
00:21:49.378 Latency(us)
00:21:49.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:49.378 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.378 Job: Nvme1n1 ended in about 0.91 seconds with error
00:21:49.378 Verification LBA range: start 0x0 length 0x400
00:21:49.378 Nvme1n1 : 0.91 211.73 13.23 70.58 0.00 224483.95 19831.76 211538.81
00:21:49.378 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.378 Job: Nvme2n1 ended in about 0.90 seconds with error
00:21:49.378 Verification LBA range: start 0x0 length 0x400
00:21:49.378 Nvme2n1 : 0.90 212.74 13.30 70.91 0.00 219428.95 19147.91 221568.67
00:21:49.378 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.378 Job: Nvme3n1 ended in about 0.91 seconds with error
00:21:49.378 Verification LBA range: start 0x0 length 0x400
00:21:49.378 Nvme3n1 : 0.91 210.09 13.13 70.03 0.00 218314.80 15614.66 218833.25
00:21:49.378 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.378 Job: Nvme4n1 ended in about 0.92 seconds with error
00:21:49.378 Verification LBA range: start 0x0 length 0x400
00:21:49.378 Nvme4n1 : 0.92 209.64 13.10 69.88 0.00 214899.09 19147.91 217009.64
00:21:49.378 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.378 Job: Nvme5n1 ended in about 0.91 seconds with error
00:21:49.378 Verification LBA range: start 0x0 length 0x400
00:21:49.378 Nvme5n1 : 0.91 211.49 13.22 70.50 0.00 208934.73 17552.25 217921.45
00:21:49.378 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.378 Job: Nvme6n1 ended in about 0.92 seconds with error
00:21:49.378 Verification LBA range: start 0x0 length 0x400
00:21:49.378 Nvme6n1 : 0.92 209.19 13.07 69.73 0.00 207498.69 20743.57 221568.67
00:21:49.378 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.378 Job: Nvme7n1 ended in about 0.92 seconds with error
00:21:49.378 Verification LBA range: start 0x0 length 0x400
00:21:49.378 Nvme7n1 : 0.92 208.74 13.05 69.58 0.00 204057.60 16298.52 207891.59
00:21:49.378 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.378 Job: Nvme8n1 ended in about 0.92 seconds with error
00:21:49.378 Verification LBA range: start 0x0 length 0x400
00:21:49.378 Nvme8n1 : 0.92 208.30 13.02 69.43 0.00 200580.90 18692.01 199685.34
00:21:49.378 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.378 Job: Nvme9n1 ended in about 0.91 seconds with error
00:21:49.378 Verification LBA range: start 0x0 length 0x400
00:21:49.378 Nvme9n1 : 0.91 211.26 13.20 70.42 0.00 193388.86 9801.91 224304.08
00:21:49.378 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:49.378 Job: Nvme10n1 ended in about 0.91 seconds with error
00:21:49.378 Verification LBA range: start 0x0 length 0x400
00:21:49.378 Nvme10n1 : 0.91 210.85 13.18 70.65 0.00 189526.76 24162.84 237069.36
00:21:49.378 ===================================================================================================================
00:21:49.378 Total : 2104.02 131.50 701.71 0.00 208118.69 9801.91 237069.36
00:21:49.378 [2024-07-15 23:47:38.265956] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:49.378 [2024-07-15 23:47:38.265993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:49.378 [2024-07-15 23:47:38.266009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:49.378 [2024-07-15 23:47:38.266017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:49.378 [2024-07-15 23:47:38.266023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:49.378 [2024-07-15 23:47:38.266360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.378 [2024-07-15 23:47:38.266377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xced050 with addr=10.0.0.2, port=4420
00:21:49.378 [2024-07-15 23:47:38.266387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xced050 is same with the state(5) to be set
00:21:49.378 [2024-07-15 23:47:38.266684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.378 [2024-07-15 23:47:38.266693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb551d0 with addr=10.0.0.2, port=4420
00:21:49.378 [2024-07-15 23:47:38.266700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb551d0 is same with the state(5) to be set
00:21:49.378 [2024-07-15 23:47:38.267008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:49.378 [2024-07-15 23:47:38.267019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0xb5cb30 with addr=10.0.0.2, port=4420 00:21:49.378 [2024-07-15 23:47:38.267026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5cb30 is same with the state(5) to be set 00:21:49.378 [2024-07-15 23:47:38.268175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:49.379 [2024-07-15 23:47:38.268189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.379 [2024-07-15 23:47:38.268504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.379 [2024-07-15 23:47:38.268517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb3b190 with addr=10.0.0.2, port=4420 00:21:49.379 [2024-07-15 23:47:38.268525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb3b190 is same with the state(5) to be set 00:21:49.379 [2024-07-15 23:47:38.268773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.379 [2024-07-15 23:47:38.268783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5fbf0 with addr=10.0.0.2, port=4420 00:21:49.379 [2024-07-15 23:47:38.268790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5fbf0 is same with the state(5) to be set 00:21:49.379 [2024-07-15 23:47:38.268802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xced050 (9): Bad file descriptor 00:21:49.379 [2024-07-15 23:47:38.268813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb551d0 (9): Bad file descriptor 00:21:49.379 [2024-07-15 23:47:38.268822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5cb30 (9): Bad file descriptor 00:21:49.379 [2024-07-15 23:47:38.268866] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform 
failover, already in progress. 00:21:49.379 [2024-07-15 23:47:38.268880] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:49.379 [2024-07-15 23:47:38.268889] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:49.379 [2024-07-15 23:47:38.269248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.379 [2024-07-15 23:47:38.269267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce48d0 with addr=10.0.0.2, port=4420 00:21:49.379 [2024-07-15 23:47:38.269274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce48d0 is same with the state(5) to be set 00:21:49.379 [2024-07-15 23:47:38.269421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.379 [2024-07-15 23:47:38.269430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb18c70 with addr=10.0.0.2, port=4420 00:21:49.379 [2024-07-15 23:47:38.269437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb18c70 is same with the state(5) to be set 00:21:49.379 [2024-07-15 23:47:38.269445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3b190 (9): Bad file descriptor 00:21:49.379 [2024-07-15 23:47:38.269453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5fbf0 (9): Bad file descriptor 00:21:49.379 [2024-07-15 23:47:38.269461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:49.379 [2024-07-15 23:47:38.269467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:49.379 [2024-07-15 23:47:38.269473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:21:49.379 [2024-07-15 23:47:38.269483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:49.379 [2024-07-15 23:47:38.269489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:49.379 [2024-07-15 23:47:38.269495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:49.379 [2024-07-15 23:47:38.269504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:49.379 [2024-07-15 23:47:38.269509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:49.379 [2024-07-15 23:47:38.269515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:49.379 [2024-07-15 23:47:38.269572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:49.379 [2024-07-15 23:47:38.269582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:49.379 [2024-07-15 23:47:38.269590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:49.379 [2024-07-15 23:47:38.269597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.379 [2024-07-15 23:47:38.269603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.379 [2024-07-15 23:47:38.269608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:49.379 [2024-07-15 23:47:38.269629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce48d0 (9): Bad file descriptor 00:21:49.379 [2024-07-15 23:47:38.269638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb18c70 (9): Bad file descriptor 00:21:49.379 [2024-07-15 23:47:38.269644] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:49.379 [2024-07-15 23:47:38.269650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:49.379 [2024-07-15 23:47:38.269656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:49.379 [2024-07-15 23:47:38.269664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:49.379 [2024-07-15 23:47:38.269670] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:49.379 [2024-07-15 23:47:38.269676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:49.379 [2024-07-15 23:47:38.269703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.379 [2024-07-15 23:47:38.269710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:49.379 [2024-07-15 23:47:38.269957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.379 [2024-07-15 23:47:38.269967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xce40d0 with addr=10.0.0.2, port=4420 00:21:49.379 [2024-07-15 23:47:38.269973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce40d0 is same with the state(5) to be set 00:21:49.379 [2024-07-15 23:47:38.270217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.379 [2024-07-15 23:47:38.270231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x667340 with addr=10.0.0.2, port=4420 00:21:49.379 [2024-07-15 23:47:38.270238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x667340 is same with the state(5) to be set 00:21:49.379 [2024-07-15 23:47:38.270485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.379 [2024-07-15 23:47:38.270494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xccd8b0 with addr=10.0.0.2, port=4420 00:21:49.379 [2024-07-15 23:47:38.270500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xccd8b0 is same with the state(5) to be set 00:21:49.379 [2024-07-15 23:47:38.270506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:49.379 [2024-07-15 23:47:38.270512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:49.379 [2024-07-15 23:47:38.270518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:49.379 [2024-07-15 23:47:38.270527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.379 [2024-07-15 23:47:38.270533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.379 [2024-07-15 23:47:38.270539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.379 [2024-07-15 23:47:38.270565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.379 [2024-07-15 23:47:38.270572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.379 [2024-07-15 23:47:38.270580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xce40d0 (9): Bad file descriptor 00:21:49.379 [2024-07-15 23:47:38.270588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x667340 (9): Bad file descriptor 00:21:49.379 [2024-07-15 23:47:38.270596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xccd8b0 (9): Bad file descriptor 00:21:49.379 [2024-07-15 23:47:38.270621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:49.379 [2024-07-15 23:47:38.270628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:49.379 [2024-07-15 23:47:38.270634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:21:49.379 [2024-07-15 23:47:38.270642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:49.379 [2024-07-15 23:47:38.270647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:49.379 [2024-07-15 23:47:38.270653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:49.379 [2024-07-15 23:47:38.270661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:49.379 [2024-07-15 23:47:38.270667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:49.379 [2024-07-15 23:47:38.270676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:49.379 [2024-07-15 23:47:38.270698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.379 [2024-07-15 23:47:38.270705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.379 [2024-07-15 23:47:38.270710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:49.948 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:49.948 23:47:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1070676 00:21:50.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1070676) - No such process 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.886 rmmod nvme_tcp 00:21:50.886 rmmod nvme_fabrics 00:21:50.886 rmmod nvme_keyring 00:21:50.886 23:47:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.886 23:47:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.859 23:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:52.859 00:21:52.859 real 0m7.646s 00:21:52.859 user 0m18.501s 00:21:52.859 sys 0m1.291s 00:21:52.859 23:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:52.859 23:47:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:52.859 ************************************ 00:21:52.859 END TEST nvmf_shutdown_tc3 00:21:52.859 ************************************ 00:21:52.859 23:47:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1136 -- # 
return 0 00:21:52.859 23:47:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:52.859 00:21:52.859 real 0m30.458s 00:21:52.859 user 1m16.594s 00:21:52.859 sys 0m7.898s 00:21:52.859 23:47:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:52.859 23:47:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:52.859 ************************************ 00:21:52.859 END TEST nvmf_shutdown 00:21:52.859 ************************************ 00:21:52.859 23:47:41 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:21:52.859 23:47:41 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:52.859 23:47:41 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.859 23:47:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.118 23:47:41 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:53.118 23:47:41 nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:53.118 23:47:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.118 23:47:41 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:53.118 23:47:41 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:53.118 23:47:41 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:53.118 23:47:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:53.118 23:47:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.118 ************************************ 00:21:53.118 START TEST nvmf_multicontroller 00:21:53.118 ************************************ 00:21:53.118 23:47:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:53.118 * Looking for test storage... 
00:21:53.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.118 23:47:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.118 23:47:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:53.118 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.118 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.118 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.118 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.118 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.118 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.118 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.118 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.118 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.119 
23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.119 23:47:42 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.119 23:47:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.396 23:47:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:58.396 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:58.396 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.396 23:47:47 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:58.396 Found net devices under 0000:86:00.0: cvl_0_0 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:58.396 Found net devices under 0000:86:00.1: cvl_0_1 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.396 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # 
nvmf_tcp_init 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.397 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:58.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:21:58.656 00:21:58.656 --- 10.0.0.2 ping statistics --- 00:21:58.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.656 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:21:58.656 00:21:58.656 --- 10.0.0.1 ping statistics --- 00:21:58.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.656 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 
-- # modprobe nvme-tcp 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1074718 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1074718 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@823 -- # '[' -z 1074718 ']' 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:58.656 23:47:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:58.656 [2024-07-15 23:47:47.597389] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:21:58.656 [2024-07-15 23:47:47.597430] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.916 [2024-07-15 23:47:47.657513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:58.916 [2024-07-15 23:47:47.733825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.916 [2024-07-15 23:47:47.733866] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.916 [2024-07-15 23:47:47.733874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.916 [2024-07-15 23:47:47.733880] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.916 [2024-07-15 23:47:47.733886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:58.916 [2024-07-15 23:47:47.733983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.916 [2024-07-15 23:47:47.734073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.916 [2024-07-15 23:47:47.734073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # return 0 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.483 [2024-07-15 23:47:48.450763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.483 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 Malloc0 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 [2024-07-15 23:47:48.513677] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 [2024-07-15 23:47:48.521595] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 Malloc1 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1074962 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1074962 /var/tmp/bdevperf.sock 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@823 -- # '[' -z 1074962 ']' 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:59.742 23:47:48 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.677 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:00.677 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # return 0 00:22:00.677 23:47:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:00.677 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:00.677 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.677 NVMe0n1 00:22:00.677 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:00.677 23:47:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.677 23:47:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:00.677 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:00.677 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.965 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:00.965 1 00:22:00.965 23:47:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:00.965 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:22:00.965 23:47:49 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:00.965 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:22:00.965 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.966 request: 00:22:00.966 { 00:22:00.966 "name": "NVMe0", 00:22:00.966 "trtype": "tcp", 00:22:00.966 "traddr": "10.0.0.2", 00:22:00.966 "adrfam": "ipv4", 00:22:00.966 "trsvcid": "4420", 00:22:00.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.966 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:00.966 "hostaddr": "10.0.0.2", 00:22:00.966 "hostsvcid": "60000", 00:22:00.966 "prchk_reftag": false, 00:22:00.966 "prchk_guard": false, 00:22:00.966 "hdgst": false, 00:22:00.966 "ddgst": false, 00:22:00.966 "method": "bdev_nvme_attach_controller", 00:22:00.966 "req_id": 1 00:22:00.966 } 00:22:00.966 Got JSON-RPC error response 00:22:00.966 response: 00:22:00.966 { 00:22:00.966 "code": -114, 00:22:00.966 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:00.966 } 00:22:00.966 
23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:22:00.966 request: 00:22:00.966 { 00:22:00.966 "name": "NVMe0", 00:22:00.966 "trtype": "tcp", 00:22:00.966 "traddr": "10.0.0.2", 00:22:00.966 "adrfam": "ipv4", 00:22:00.966 "trsvcid": "4420", 00:22:00.966 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:00.966 "hostaddr": "10.0.0.2", 00:22:00.966 "hostsvcid": "60000", 00:22:00.966 "prchk_reftag": false, 00:22:00.966 "prchk_guard": false, 00:22:00.966 "hdgst": false, 00:22:00.966 "ddgst": false, 00:22:00.966 "method": "bdev_nvme_attach_controller", 00:22:00.966 "req_id": 1 00:22:00.966 } 00:22:00.966 Got JSON-RPC error response 00:22:00.966 response: 00:22:00.966 { 00:22:00.966 "code": -114, 00:22:00.966 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:00.966 } 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 
-- # local arg=rpc_cmd 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.966 request: 00:22:00.966 { 00:22:00.966 "name": "NVMe0", 00:22:00.966 "trtype": "tcp", 00:22:00.966 "traddr": "10.0.0.2", 00:22:00.966 "adrfam": "ipv4", 00:22:00.966 "trsvcid": "4420", 00:22:00.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.966 "hostaddr": "10.0.0.2", 00:22:00.966 "hostsvcid": "60000", 00:22:00.966 "prchk_reftag": false, 00:22:00.966 "prchk_guard": false, 00:22:00.966 "hdgst": false, 00:22:00.966 "ddgst": false, 00:22:00.966 "multipath": "disable", 00:22:00.966 "method": "bdev_nvme_attach_controller", 00:22:00.966 "req_id": 1 00:22:00.966 } 00:22:00.966 Got JSON-RPC error response 00:22:00.966 response: 00:22:00.966 { 00:22:00.966 "code": -114, 00:22:00.966 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:00.966 } 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@664 -- # [[ -n '' ]] 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@642 -- # local es=0 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:00.966 request: 00:22:00.966 { 00:22:00.966 "name": "NVMe0", 00:22:00.966 "trtype": "tcp", 00:22:00.966 "traddr": "10.0.0.2", 00:22:00.966 "adrfam": "ipv4", 00:22:00.966 "trsvcid": "4420", 00:22:00.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.966 "hostaddr": "10.0.0.2", 00:22:00.966 
"hostsvcid": "60000", 00:22:00.966 "prchk_reftag": false, 00:22:00.966 "prchk_guard": false, 00:22:00.966 "hdgst": false, 00:22:00.966 "ddgst": false, 00:22:00.966 "multipath": "failover", 00:22:00.966 "method": "bdev_nvme_attach_controller", 00:22:00.966 "req_id": 1 00:22:00.966 } 00:22:00.966 Got JSON-RPC error response 00:22:00.966 response: 00:22:00.966 { 00:22:00.966 "code": -114, 00:22:00.966 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:00.966 } 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@645 -- # es=1 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:00.966 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.225 00:22:01.225 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:01.225 23:47:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.225 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:01.225 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.225 23:47:49 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:01.225 23:47:49 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:01.225 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:01.225 23:47:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.225 00:22:01.225 23:47:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:01.225 23:47:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.225 23:47:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:01.225 23:47:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:01.225 23:47:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:01.225 23:47:50 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:01.225 23:47:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:01.225 23:47:50 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.596 0 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:02.596 
23:47:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1074962 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@942 -- # '[' -z 1074962 ']' 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # kill -0 1074962 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # uname 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1074962 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1074962' 00:22:02.596 killing process with pid 1074962 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@961 -- # kill 1074962 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # wait 1074962 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:02.596 23:47:51 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1606 -- # read -r file 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # sort -u 00:22:02.596 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # cat 00:22:02.596 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:02.596 [2024-07-15 23:47:48.623252] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:22:02.596 [2024-07-15 23:47:48.623301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1074962 ] 00:22:02.596 [2024-07-15 23:47:48.677882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.596 [2024-07-15 23:47:48.757321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.596 [2024-07-15 23:47:50.090536] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name dd4fc0e6-ad1b-4075-80ba-060a83767193 already exists 00:22:02.596 [2024-07-15 23:47:50.090571] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:dd4fc0e6-ad1b-4075-80ba-060a83767193 alias for bdev NVMe1n1 00:22:02.596 [2024-07-15 23:47:50.090579] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:02.596 Running I/O for 1 seconds... 00:22:02.596 00:22:02.596 Latency(us) 00:22:02.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.596 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:02.596 NVMe0n1 : 1.00 24631.91 96.22 0.00 0.00 5184.82 2649.93 8377.21 00:22:02.596 =================================================================================================================== 00:22:02.596 Total : 24631.91 96.22 0.00 0.00 5184.82 2649.93 8377.21 00:22:02.596 Received shutdown signal, test time was about 1.000000 seconds 00:22:02.596 00:22:02.596 Latency(us) 00:22:02.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.597 =================================================================================================================== 00:22:02.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.597 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@1612 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1606 -- # read -r file 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:02.597 rmmod nvme_tcp 00:22:02.597 rmmod nvme_fabrics 00:22:02.597 rmmod nvme_keyring 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1074718 ']' 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1074718 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@942 -- # '[' -z 1074718 ']' 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # kill -0 1074718 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # uname 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:02.597 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1074718 
00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1074718' 00:22:02.866 killing process with pid 1074718 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@961 -- # kill 1074718 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # wait 1074718 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.866 23:47:51 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.400 23:47:53 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:05.400 00:22:05.400 real 0m11.986s 00:22:05.400 user 0m16.897s 00:22:05.400 sys 0m4.947s 00:22:05.400 23:47:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:05.400 23:47:53 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:05.400 ************************************ 00:22:05.400 END TEST nvmf_multicontroller 00:22:05.400 
************************************ 00:22:05.400 23:47:53 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:22:05.400 23:47:53 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:05.400 23:47:53 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:05.400 23:47:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:05.400 23:47:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:05.400 ************************************ 00:22:05.400 START TEST nvmf_aer 00:22:05.400 ************************************ 00:22:05.400 23:47:53 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:05.400 * Looking for test storage... 00:22:05.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.400 
23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.400 23:47:54 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.400 23:47:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.671 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.671 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:10.671 23:47:58 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:10.672 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:10.672 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:10.672 Found net devices under 0000:86:00.0: cvl_0_0 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.672 23:47:58 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:10.672 Found net devices under 0000:86:00.1: cvl_0_1 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:10.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:22:10.672 00:22:10.672 --- 10.0.0.2 ping statistics --- 00:22:10.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.672 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:22:10.672 23:47:58 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:10.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:22:10.672 00:22:10.672 --- 10.0.0.1 ping statistics --- 00:22:10.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.672 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1078797 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1078797 00:22:10.672 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@823 -- # '[' -z 1078797 ']' 00:22:10.673 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.673 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:10.673 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@830 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.673 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:10.673 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:10.673 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:10.673 [2024-07-15 23:47:59.084779] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:22:10.673 [2024-07-15 23:47:59.084824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.673 [2024-07-15 23:47:59.143129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.673 [2024-07-15 23:47:59.225093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.673 [2024-07-15 23:47:59.225128] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.673 [2024-07-15 23:47:59.225135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.673 [2024-07-15 23:47:59.225141] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.673 [2024-07-15 23:47:59.225146] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.673 [2024-07-15 23:47:59.225186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.673 [2024-07-15 23:47:59.225286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.673 [2024-07-15 23:47:59.225311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.673 [2024-07-15 23:47:59.225312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.932 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:10.932 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # return 0 00:22:10.932 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:10.932 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:10.932 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.191 [2024-07-15 23:47:59.938306] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.191 Malloc0 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.191 [2024-07-15 23:47:59.990192] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.191 23:47:59 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.191 [ 00:22:11.191 { 00:22:11.191 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:11.191 "subtype": "Discovery", 00:22:11.191 "listen_addresses": [], 00:22:11.191 "allow_any_host": true, 00:22:11.191 "hosts": [] 00:22:11.191 }, 00:22:11.191 { 00:22:11.191 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.191 "subtype": "NVMe", 00:22:11.191 "listen_addresses": [ 00:22:11.191 { 00:22:11.191 "trtype": "TCP", 00:22:11.191 "adrfam": "IPv4", 
00:22:11.191 "traddr": "10.0.0.2", 00:22:11.191 "trsvcid": "4420" 00:22:11.191 } 00:22:11.191 ], 00:22:11.191 "allow_any_host": true, 00:22:11.191 "hosts": [], 00:22:11.191 "serial_number": "SPDK00000000000001", 00:22:11.191 "model_number": "SPDK bdev Controller", 00:22:11.191 "max_namespaces": 2, 00:22:11.191 "min_cntlid": 1, 00:22:11.191 "max_cntlid": 65519, 00:22:11.191 "namespaces": [ 00:22:11.191 { 00:22:11.191 "nsid": 1, 00:22:11.191 "bdev_name": "Malloc0", 00:22:11.191 "name": "Malloc0", 00:22:11.191 "nguid": "556345673AC64EBEA65718E376AD108A", 00:22:11.191 "uuid": "55634567-3ac6-4ebe-a657-18e376ad108a" 00:22:11.191 } 00:22:11.191 ] 00:22:11.191 } 00:22:11.191 ] 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1078988 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1259 -- # local i=0 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 0 -lt 200 ']' 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # i=1 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 1 -lt 200 ']' 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # i=2 00:22:11.191 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1270 -- # return 0 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.450 Malloc1 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.450 Asynchronous Event Request test 00:22:11.450 Attaching to 10.0.0.2 00:22:11.450 Attached to 10.0.0.2 00:22:11.450 Registering asynchronous event callbacks... 00:22:11.450 Starting namespace attribute notice tests for all controllers... 
00:22:11.450 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:11.450 aer_cb - Changed Namespace 00:22:11.450 Cleaning up... 00:22:11.450 [ 00:22:11.450 { 00:22:11.450 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:11.450 "subtype": "Discovery", 00:22:11.450 "listen_addresses": [], 00:22:11.450 "allow_any_host": true, 00:22:11.450 "hosts": [] 00:22:11.450 }, 00:22:11.450 { 00:22:11.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.450 "subtype": "NVMe", 00:22:11.450 "listen_addresses": [ 00:22:11.450 { 00:22:11.450 "trtype": "TCP", 00:22:11.450 "adrfam": "IPv4", 00:22:11.450 "traddr": "10.0.0.2", 00:22:11.450 "trsvcid": "4420" 00:22:11.450 } 00:22:11.450 ], 00:22:11.450 "allow_any_host": true, 00:22:11.450 "hosts": [], 00:22:11.450 "serial_number": "SPDK00000000000001", 00:22:11.450 "model_number": "SPDK bdev Controller", 00:22:11.450 "max_namespaces": 2, 00:22:11.450 "min_cntlid": 1, 00:22:11.450 "max_cntlid": 65519, 00:22:11.450 "namespaces": [ 00:22:11.450 { 00:22:11.450 "nsid": 1, 00:22:11.450 "bdev_name": "Malloc0", 00:22:11.450 "name": "Malloc0", 00:22:11.450 "nguid": "556345673AC64EBEA65718E376AD108A", 00:22:11.450 "uuid": "55634567-3ac6-4ebe-a657-18e376ad108a" 00:22:11.450 }, 00:22:11.450 { 00:22:11.450 "nsid": 2, 00:22:11.450 "bdev_name": "Malloc1", 00:22:11.450 "name": "Malloc1", 00:22:11.450 "nguid": "E69FA4A720B44D02B1FD3D0F08039AF3", 00:22:11.450 "uuid": "e69fa4a7-20b4-4d02-b1fd-3d0f08039af3" 00:22:11.450 } 00:22:11.450 ] 00:22:11.450 } 00:22:11.450 ] 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1078988 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.450 23:48:00 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.450 rmmod nvme_tcp 00:22:11.450 rmmod nvme_fabrics 00:22:11.450 rmmod nvme_keyring 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1078797 ']' 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1078797 00:22:11.450 
23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@942 -- # '[' -z 1078797 ']' 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # kill -0 1078797 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # uname 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:11.450 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1078797 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1078797' 00:22:11.708 killing process with pid 1078797 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@961 -- # kill 1078797 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # wait 1078797 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.708 23:48:00 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.239 23:48:02 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:14.239 00:22:14.239 real 0m8.740s 00:22:14.239 user 0m6.980s 00:22:14.239 sys 0m4.249s 00:22:14.239 23:48:02 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:14.239 23:48:02 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:14.239 ************************************ 00:22:14.239 END TEST nvmf_aer 00:22:14.239 ************************************ 00:22:14.239 23:48:02 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:22:14.239 23:48:02 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:14.240 23:48:02 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:14.240 23:48:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:14.240 23:48:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:14.240 ************************************ 00:22:14.240 START TEST nvmf_async_init 00:22:14.240 ************************************ 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:14.240 * Looking for test storage... 
00:22:14.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=705ad85fb5ed4e2883a816cfae6c7786 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.240 23:48:02 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.586 
23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:19.586 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.586 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ 
tcp == rdma ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:19.587 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:19.587 Found net devices under 0000:86:00.0: cvl_0_0 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:19.587 Found net devices under 0000:86:00.1: cvl_0_1 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:19.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:22:19.587 00:22:19.587 --- 10.0.0.2 ping statistics --- 00:22:19.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.587 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:19.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:22:19.587 00:22:19.587 --- 10.0.0.1 ping statistics --- 00:22:19.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.587 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 
00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1082614 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1082614 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@823 -- # '[' -z 1082614 ']' 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:19.587 23:48:07 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.587 [2024-07-15 23:48:07.864403] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:22:19.587 [2024-07-15 23:48:07.864445] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.587 [2024-07-15 23:48:07.921743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.587 [2024-07-15 23:48:08.001337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.587 [2024-07-15 23:48:08.001370] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:19.587 [2024-07-15 23:48:08.001378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:19.587 [2024-07-15 23:48:08.001384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:19.587 [2024-07-15 23:48:08.001389] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:19.587 [2024-07-15 23:48:08.001407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.845 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:19.845 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # return 0 00:22:19.845 23:48:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.845 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:19.845 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.845 23:48:08 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.845 23:48:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.846 [2024-07-15 23:48:08.692729] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.846 null0 00:22:19.846 
23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 705ad85fb5ed4e2883a816cfae6c7786 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:19.846 [2024-07-15 23:48:08.732932] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:19.846 23:48:08 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:19.846 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.104 nvme0n1 00:22:20.104 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.104 23:48:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:20.104 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:20.104 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.104 [ 00:22:20.104 { 00:22:20.104 "name": "nvme0n1", 00:22:20.104 "aliases": [ 00:22:20.104 "705ad85f-b5ed-4e28-83a8-16cfae6c7786" 00:22:20.104 ], 00:22:20.104 "product_name": "NVMe disk", 00:22:20.104 "block_size": 512, 00:22:20.104 "num_blocks": 2097152, 00:22:20.104 "uuid": "705ad85f-b5ed-4e28-83a8-16cfae6c7786", 00:22:20.104 "assigned_rate_limits": { 00:22:20.104 "rw_ios_per_sec": 0, 00:22:20.104 "rw_mbytes_per_sec": 0, 00:22:20.104 "r_mbytes_per_sec": 0, 00:22:20.104 "w_mbytes_per_sec": 0 00:22:20.104 }, 00:22:20.104 "claimed": false, 00:22:20.104 "zoned": false, 00:22:20.104 "supported_io_types": { 00:22:20.104 "read": true, 00:22:20.104 "write": true, 00:22:20.104 "unmap": false, 00:22:20.104 "flush": true, 00:22:20.104 "reset": true, 00:22:20.104 "nvme_admin": true, 00:22:20.104 "nvme_io": true, 00:22:20.104 "nvme_io_md": false, 00:22:20.104 "write_zeroes": true, 00:22:20.104 "zcopy": false, 00:22:20.104 "get_zone_info": false, 00:22:20.104 "zone_management": false, 00:22:20.104 "zone_append": false, 00:22:20.104 "compare": true, 00:22:20.104 "compare_and_write": true, 00:22:20.104 "abort": true, 00:22:20.104 "seek_hole": false, 00:22:20.104 "seek_data": false, 00:22:20.104 "copy": 
true, 00:22:20.104 "nvme_iov_md": false 00:22:20.104 }, 00:22:20.104 "memory_domains": [ 00:22:20.104 { 00:22:20.104 "dma_device_id": "system", 00:22:20.104 "dma_device_type": 1 00:22:20.104 } 00:22:20.104 ], 00:22:20.104 "driver_specific": { 00:22:20.104 "nvme": [ 00:22:20.104 { 00:22:20.104 "trid": { 00:22:20.104 "trtype": "TCP", 00:22:20.104 "adrfam": "IPv4", 00:22:20.104 "traddr": "10.0.0.2", 00:22:20.104 "trsvcid": "4420", 00:22:20.104 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:20.104 }, 00:22:20.104 "ctrlr_data": { 00:22:20.104 "cntlid": 1, 00:22:20.104 "vendor_id": "0x8086", 00:22:20.104 "model_number": "SPDK bdev Controller", 00:22:20.104 "serial_number": "00000000000000000000", 00:22:20.104 "firmware_revision": "24.09", 00:22:20.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.104 "oacs": { 00:22:20.104 "security": 0, 00:22:20.104 "format": 0, 00:22:20.104 "firmware": 0, 00:22:20.104 "ns_manage": 0 00:22:20.104 }, 00:22:20.104 "multi_ctrlr": true, 00:22:20.104 "ana_reporting": false 00:22:20.104 }, 00:22:20.104 "vs": { 00:22:20.104 "nvme_version": "1.3" 00:22:20.104 }, 00:22:20.104 "ns_data": { 00:22:20.104 "id": 1, 00:22:20.104 "can_share": true 00:22:20.104 } 00:22:20.104 } 00:22:20.104 ], 00:22:20.104 "mp_policy": "active_passive" 00:22:20.104 } 00:22:20.104 } 00:22:20.104 ] 00:22:20.104 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.104 23:48:08 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:20.104 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:20.104 23:48:08 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.104 [2024-07-15 23:48:08.981477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:20.104 [2024-07-15 23:48:08.981537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2c250 
(9): Bad file descriptor 00:22:20.364 [2024-07-15 23:48:09.113324] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.364 [ 00:22:20.364 { 00:22:20.364 "name": "nvme0n1", 00:22:20.364 "aliases": [ 00:22:20.364 "705ad85f-b5ed-4e28-83a8-16cfae6c7786" 00:22:20.364 ], 00:22:20.364 "product_name": "NVMe disk", 00:22:20.364 "block_size": 512, 00:22:20.364 "num_blocks": 2097152, 00:22:20.364 "uuid": "705ad85f-b5ed-4e28-83a8-16cfae6c7786", 00:22:20.364 "assigned_rate_limits": { 00:22:20.364 "rw_ios_per_sec": 0, 00:22:20.364 "rw_mbytes_per_sec": 0, 00:22:20.364 "r_mbytes_per_sec": 0, 00:22:20.364 "w_mbytes_per_sec": 0 00:22:20.364 }, 00:22:20.364 "claimed": false, 00:22:20.364 "zoned": false, 00:22:20.364 "supported_io_types": { 00:22:20.364 "read": true, 00:22:20.364 "write": true, 00:22:20.364 "unmap": false, 00:22:20.364 "flush": true, 00:22:20.364 "reset": true, 00:22:20.364 "nvme_admin": true, 00:22:20.364 "nvme_io": true, 00:22:20.364 "nvme_io_md": false, 00:22:20.364 "write_zeroes": true, 00:22:20.364 "zcopy": false, 00:22:20.364 "get_zone_info": false, 00:22:20.364 "zone_management": false, 00:22:20.364 "zone_append": false, 00:22:20.364 "compare": true, 00:22:20.364 "compare_and_write": true, 00:22:20.364 "abort": true, 00:22:20.364 "seek_hole": false, 00:22:20.364 "seek_data": false, 00:22:20.364 "copy": true, 00:22:20.364 "nvme_iov_md": false 00:22:20.364 }, 00:22:20.364 "memory_domains": [ 00:22:20.364 { 00:22:20.364 "dma_device_id": "system", 00:22:20.364 "dma_device_type": 1 00:22:20.364 } 00:22:20.364 ], 00:22:20.364 
"driver_specific": { 00:22:20.364 "nvme": [ 00:22:20.364 { 00:22:20.364 "trid": { 00:22:20.364 "trtype": "TCP", 00:22:20.364 "adrfam": "IPv4", 00:22:20.364 "traddr": "10.0.0.2", 00:22:20.364 "trsvcid": "4420", 00:22:20.364 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:20.364 }, 00:22:20.364 "ctrlr_data": { 00:22:20.364 "cntlid": 2, 00:22:20.364 "vendor_id": "0x8086", 00:22:20.364 "model_number": "SPDK bdev Controller", 00:22:20.364 "serial_number": "00000000000000000000", 00:22:20.364 "firmware_revision": "24.09", 00:22:20.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.364 "oacs": { 00:22:20.364 "security": 0, 00:22:20.364 "format": 0, 00:22:20.364 "firmware": 0, 00:22:20.364 "ns_manage": 0 00:22:20.364 }, 00:22:20.364 "multi_ctrlr": true, 00:22:20.364 "ana_reporting": false 00:22:20.364 }, 00:22:20.364 "vs": { 00:22:20.364 "nvme_version": "1.3" 00:22:20.364 }, 00:22:20.364 "ns_data": { 00:22:20.364 "id": 1, 00:22:20.364 "can_share": true 00:22:20.364 } 00:22:20.364 } 00:22:20.364 ], 00:22:20.364 "mp_policy": "active_passive" 00:22:20.364 } 00:22:20.364 } 00:22:20.364 ] 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.E0vjkKW4zr 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.E0vjkKW4zr 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.364 [2024-07-15 23:48:09.162041] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:20.364 [2024-07-15 23:48:09.162176] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.E0vjkKW4zr 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.364 [2024-07-15 23:48:09.170054] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.E0vjkKW4zr 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.364 [2024-07-15 23:48:09.178090] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.364 [2024-07-15 23:48:09.178126] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:20.364 nvme0n1 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:20.364 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.364 [ 00:22:20.364 { 00:22:20.365 "name": "nvme0n1", 00:22:20.365 "aliases": [ 00:22:20.365 "705ad85f-b5ed-4e28-83a8-16cfae6c7786" 00:22:20.365 ], 00:22:20.365 "product_name": "NVMe disk", 00:22:20.365 "block_size": 512, 00:22:20.365 "num_blocks": 2097152, 00:22:20.365 "uuid": "705ad85f-b5ed-4e28-83a8-16cfae6c7786", 00:22:20.365 "assigned_rate_limits": { 00:22:20.365 "rw_ios_per_sec": 0, 00:22:20.365 "rw_mbytes_per_sec": 0, 00:22:20.365 "r_mbytes_per_sec": 0, 00:22:20.365 "w_mbytes_per_sec": 0 00:22:20.365 }, 00:22:20.365 "claimed": false, 00:22:20.365 "zoned": false, 00:22:20.365 "supported_io_types": { 00:22:20.365 "read": true, 00:22:20.365 "write": true, 00:22:20.365 "unmap": false, 00:22:20.365 "flush": true, 00:22:20.365 "reset": true, 00:22:20.365 "nvme_admin": true, 00:22:20.365 "nvme_io": true, 00:22:20.365 "nvme_io_md": false, 00:22:20.365 "write_zeroes": true, 00:22:20.365 "zcopy": false, 00:22:20.365 
"get_zone_info": false, 00:22:20.365 "zone_management": false, 00:22:20.365 "zone_append": false, 00:22:20.365 "compare": true, 00:22:20.365 "compare_and_write": true, 00:22:20.365 "abort": true, 00:22:20.365 "seek_hole": false, 00:22:20.365 "seek_data": false, 00:22:20.365 "copy": true, 00:22:20.365 "nvme_iov_md": false 00:22:20.365 }, 00:22:20.365 "memory_domains": [ 00:22:20.365 { 00:22:20.365 "dma_device_id": "system", 00:22:20.365 "dma_device_type": 1 00:22:20.365 } 00:22:20.365 ], 00:22:20.365 "driver_specific": { 00:22:20.365 "nvme": [ 00:22:20.365 { 00:22:20.365 "trid": { 00:22:20.365 "trtype": "TCP", 00:22:20.365 "adrfam": "IPv4", 00:22:20.365 "traddr": "10.0.0.2", 00:22:20.365 "trsvcid": "4421", 00:22:20.365 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:20.365 }, 00:22:20.365 "ctrlr_data": { 00:22:20.365 "cntlid": 3, 00:22:20.365 "vendor_id": "0x8086", 00:22:20.365 "model_number": "SPDK bdev Controller", 00:22:20.365 "serial_number": "00000000000000000000", 00:22:20.365 "firmware_revision": "24.09", 00:22:20.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:20.365 "oacs": { 00:22:20.365 "security": 0, 00:22:20.365 "format": 0, 00:22:20.365 "firmware": 0, 00:22:20.365 "ns_manage": 0 00:22:20.365 }, 00:22:20.365 "multi_ctrlr": true, 00:22:20.365 "ana_reporting": false 00:22:20.365 }, 00:22:20.365 "vs": { 00:22:20.365 "nvme_version": "1.3" 00:22:20.365 }, 00:22:20.365 "ns_data": { 00:22:20.365 "id": 1, 00:22:20.365 "can_share": true 00:22:20.365 } 00:22:20.365 } 00:22:20.365 ], 00:22:20.365 "mp_policy": "active_passive" 00:22:20.365 } 00:22:20.365 } 00:22:20.365 ] 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # 
set +x 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.E0vjkKW4zr 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:20.365 rmmod nvme_tcp 00:22:20.365 rmmod nvme_fabrics 00:22:20.365 rmmod nvme_keyring 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1082614 ']' 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1082614 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@942 -- # '[' -z 1082614 ']' 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # kill -0 1082614 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # uname 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:20.365 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1082614 
00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1082614' 00:22:20.624 killing process with pid 1082614 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@961 -- # kill 1082614 00:22:20.624 [2024-07-15 23:48:09.369406] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:20.624 [2024-07-15 23:48:09.369430] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # wait 1082614 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.624 23:48:09 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.158 23:48:11 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:23.158 00:22:23.158 real 0m8.815s 00:22:23.158 user 0m3.181s 00:22:23.158 sys 0m4.104s 00:22:23.158 23:48:11 nvmf_tcp.nvmf_async_init 
-- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:23.158 23:48:11 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:23.158 ************************************ 00:22:23.158 END TEST nvmf_async_init 00:22:23.158 ************************************ 00:22:23.158 23:48:11 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:22:23.158 23:48:11 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:23.158 23:48:11 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:23.158 23:48:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:23.158 23:48:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.159 ************************************ 00:22:23.159 START TEST dma 00:22:23.159 ************************************ 00:22:23.159 23:48:11 nvmf_tcp.dma -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:23.159 * Looking for test storage... 
00:22:23.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.159 23:48:11 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.159 23:48:11 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:22:23.159 23:48:11 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.159 23:48:11 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.159 23:48:11 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.159 23:48:11 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.159 23:48:11 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:22:23.159 23:48:11 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:22:23.159 23:48:11 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.159 23:48:11 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.159 23:48:11 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:23.159 23:48:11 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:22:23.159 00:22:23.159 real 0m0.110s 00:22:23.159 user 0m0.052s 00:22:23.159 sys 0m0.066s 00:22:23.159 23:48:11 nvmf_tcp.dma -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:23.159 23:48:11 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:22:23.159 ************************************ 00:22:23.159 END TEST dma 00:22:23.159 ************************************ 00:22:23.159 23:48:11 nvmf_tcp -- 
common/autotest_common.sh@1136 -- # return 0 00:22:23.159 23:48:11 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:23.159 23:48:11 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:23.159 23:48:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:23.159 23:48:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:23.159 ************************************ 00:22:23.159 START TEST nvmf_identify 00:22:23.159 ************************************ 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:23.159 * Looking for test storage... 00:22:23.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.159 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:23.160 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:23.160 23:48:11 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 
00:22:23.160 23:48:11 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:28.431 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:22:28.431 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:28.432 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:28.432 Found net devices under 0000:86:00.0: cvl_0_0 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:28.432 Found net devices under 0000:86:00.1: cvl_0_1 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:28.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:22:28.432 00:22:28.432 --- 10.0.0.2 ping statistics --- 00:22:28.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.432 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:22:28.432 00:22:28.432 --- 10.0.0.1 ping statistics --- 00:22:28.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.432 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1086693 00:22:28.432 23:48:17 
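The two ping checks above gate the rest of the test run; each emits a one-line rtt summary that can be parsed mechanically when post-processing logs like this one. A minimal sketch, where the regex is an assumption based on the iputils `rtt min/avg/max/mdev` summary format shown in the log:

```python
import re

def parse_avg_rtt_ms(ping_output: str) -> float:
    """Extract the average round-trip time in ms from a ping summary line."""
    m = re.search(r"rtt min/avg/max/mdev = [\d.]+/([\d.]+)/[\d.]+/[\d.]+ ms",
                  ping_output)
    if m is None:
        raise ValueError("no rtt summary line found")
    return float(m.group(1))

sample = "rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms"
print(parse_avg_rtt_ms(sample))  # → 0.154
```

For a single-packet `ping -c 1` run like the one above, min, avg, and max are necessarily identical, so any of the four fields would do.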
nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1086693 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@823 -- # '[' -z 1086693 ']' 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:28.432 23:48:17 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:28.432 [2024-07-15 23:48:17.374644] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:22:28.432 [2024-07-15 23:48:17.374704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.691 [2024-07-15 23:48:17.432970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.691 [2024-07-15 23:48:17.514867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.691 [2024-07-15 23:48:17.514905] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:28.691 [2024-07-15 23:48:17.514912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.691 [2024-07-15 23:48:17.514919] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.691 [2024-07-15 23:48:17.514924] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.691 [2024-07-15 23:48:17.514964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.691 [2024-07-15 23:48:17.515081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.691 [2024-07-15 23:48:17.515167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.691 [2024-07-15 23:48:17.515168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # return 0 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.260 [2024-07-15 23:48:18.181121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:22:29.260 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.520 Malloc0 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.520 [2024-07-15 23:48:18.269259] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:29.520 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:29.521 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.521 [ 00:22:29.521 { 00:22:29.521 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:29.521 "subtype": "Discovery", 00:22:29.521 "listen_addresses": [ 00:22:29.521 { 00:22:29.521 "trtype": "TCP", 00:22:29.521 "adrfam": "IPv4", 00:22:29.521 "traddr": "10.0.0.2", 00:22:29.521 "trsvcid": "4420" 00:22:29.521 } 00:22:29.521 ], 00:22:29.521 "allow_any_host": true, 00:22:29.521 "hosts": [] 00:22:29.521 }, 00:22:29.521 { 00:22:29.521 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:29.521 "subtype": "NVMe", 00:22:29.521 "listen_addresses": [ 00:22:29.521 { 00:22:29.521 "trtype": "TCP", 00:22:29.521 "adrfam": "IPv4", 00:22:29.521 "traddr": "10.0.0.2", 00:22:29.521 "trsvcid": "4420" 00:22:29.521 } 00:22:29.521 ], 00:22:29.521 "allow_any_host": true, 00:22:29.521 "hosts": [], 00:22:29.521 "serial_number": "SPDK00000000000001", 00:22:29.521 "model_number": "SPDK bdev Controller", 00:22:29.521 "max_namespaces": 32, 00:22:29.521 "min_cntlid": 1, 00:22:29.521 "max_cntlid": 65519, 00:22:29.521 "namespaces": [ 00:22:29.521 { 00:22:29.521 "nsid": 1, 00:22:29.521 "bdev_name": "Malloc0", 00:22:29.521 "name": "Malloc0", 00:22:29.521 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:29.521 "eui64": "ABCDEF0123456789", 00:22:29.521 "uuid": "bb520398-974f-4ad8-a181-8720c7af10f8" 00:22:29.521 } 00:22:29.521 ] 00:22:29.521 } 00:22:29.521 ] 00:22:29.521 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:29.521 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # 
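The `nvmf_get_subsystems` reply above is valid JSON, but the harness interleaves every line with its elapsed-time stamp, which makes the dump awkward to consume directly from a captured log. A sketch of recovering the structure by stripping those `HH:MM:SS.mmm` prefixes first; the timestamp pattern is an assumption based on the format shown in this log:

```python
import json
import re

def recover_rpc_json(log_text: str):
    """Strip interleaved elapsed-time stamps and parse the remaining JSON."""
    cleaned = re.sub(r"\b\d{2}:\d{2}:\d{2}\.\d{3}\s*", "", log_text)
    return json.loads(cleaned)

# Abbreviated stand-in for the timestamp-prefixed dump seen in the log.
sample = ('00:22:29.521 [ 00:22:29.521 { 00:22:29.521 "nqn": '
          '"nqn.2014-08.org.nvmexpress.discovery", 00:22:29.521 '
          '"subtype": "Discovery" 00:22:29.521 } 00:22:29.521 ]')
subsystems = recover_rpc_json(sample)
print(subsystems[0]["subtype"])  # → Discovery
```

The stamp regex requires two colon-separated pairs plus a millisecond field, so values embedded in the JSON itself (NQN dates, IPv4 addresses, port numbers) are left untouched.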
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:29.521 [2024-07-15 23:48:18.322657] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:22:29.521 [2024-07-15 23:48:18.322703] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086854 ] 00:22:29.521 [2024-07-15 23:48:18.353770] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:29.521 [2024-07-15 23:48:18.353823] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:29.521 [2024-07-15 23:48:18.353828] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:29.521 [2024-07-15 23:48:18.353838] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:29.521 [2024-07-15 23:48:18.353844] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:29.521 [2024-07-15 23:48:18.354125] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:29.521 [2024-07-15 23:48:18.354153] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8c6ec0 0 00:22:29.521 [2024-07-15 23:48:18.368233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:29.521 [2024-07-15 23:48:18.368243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:29.521 [2024-07-15 23:48:18.368247] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:29.521 [2024-07-15 23:48:18.368250] 
nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:29.521 [2024-07-15 23:48:18.368285] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.368290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.368294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8c6ec0) 00:22:29.521 [2024-07-15 23:48:18.368306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:29.521 [2024-07-15 23:48:18.368320] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949e40, cid 0, qid 0 00:22:29.521 [2024-07-15 23:48:18.376234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.521 [2024-07-15 23:48:18.376242] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.521 [2024-07-15 23:48:18.376245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949e40) on tqpair=0x8c6ec0 00:22:29.521 [2024-07-15 23:48:18.376258] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:29.521 [2024-07-15 23:48:18.376264] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:29.521 [2024-07-15 23:48:18.376268] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:29.521 [2024-07-15 23:48:18.376281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8c6ec0) 
00:22:29.521 [2024-07-15 23:48:18.376294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.521 [2024-07-15 23:48:18.376309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949e40, cid 0, qid 0 00:22:29.521 [2024-07-15 23:48:18.376503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.521 [2024-07-15 23:48:18.376510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.521 [2024-07-15 23:48:18.376513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949e40) on tqpair=0x8c6ec0 00:22:29.521 [2024-07-15 23:48:18.376521] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:29.521 [2024-07-15 23:48:18.376528] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:29.521 [2024-07-15 23:48:18.376535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376542] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8c6ec0) 00:22:29.521 [2024-07-15 23:48:18.376548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.521 [2024-07-15 23:48:18.376558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949e40, cid 0, qid 0 00:22:29.521 [2024-07-15 23:48:18.376637] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.521 [2024-07-15 23:48:18.376643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.521 
[2024-07-15 23:48:18.376646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376649] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949e40) on tqpair=0x8c6ec0 00:22:29.521 [2024-07-15 23:48:18.376654] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:29.521 [2024-07-15 23:48:18.376660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:29.521 [2024-07-15 23:48:18.376666] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8c6ec0) 00:22:29.521 [2024-07-15 23:48:18.376679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.521 [2024-07-15 23:48:18.376689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949e40, cid 0, qid 0 00:22:29.521 [2024-07-15 23:48:18.376770] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.521 [2024-07-15 23:48:18.376776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.521 [2024-07-15 23:48:18.376778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376782] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949e40) on tqpair=0x8c6ec0 00:22:29.521 [2024-07-15 23:48:18.376786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:29.521 [2024-07-15 23:48:18.376794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:22:29.521 [2024-07-15 23:48:18.376798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8c6ec0) 00:22:29.521 [2024-07-15 23:48:18.376807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.521 [2024-07-15 23:48:18.376816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949e40, cid 0, qid 0 00:22:29.521 [2024-07-15 23:48:18.376892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.521 [2024-07-15 23:48:18.376901] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.521 [2024-07-15 23:48:18.376904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.376908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949e40) on tqpair=0x8c6ec0 00:22:29.521 [2024-07-15 23:48:18.376912] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:29.521 [2024-07-15 23:48:18.376916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:29.521 [2024-07-15 23:48:18.376922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:29.521 [2024-07-15 23:48:18.377027] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:29.521 [2024-07-15 23:48:18.377032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:29.521 [2024-07-15 23:48:18.377039] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.377043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.377045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8c6ec0) 00:22:29.521 [2024-07-15 23:48:18.377051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.521 [2024-07-15 23:48:18.377061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949e40, cid 0, qid 0 00:22:29.521 [2024-07-15 23:48:18.377160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.521 [2024-07-15 23:48:18.377165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.521 [2024-07-15 23:48:18.377168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.377172] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949e40) on tqpair=0x8c6ec0 00:22:29.521 [2024-07-15 23:48:18.377176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:29.521 [2024-07-15 23:48:18.377183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.377187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.521 [2024-07-15 23:48:18.377190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8c6ec0) 00:22:29.521 [2024-07-15 23:48:18.377196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.521 [2024-07-15 23:48:18.377205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949e40, cid 0, qid 0 00:22:29.521 [2024-07-15 23:48:18.377286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:29.522 [2024-07-15 23:48:18.377292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.522 [2024-07-15 23:48:18.377295] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.377298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949e40) on tqpair=0x8c6ec0 00:22:29.522 [2024-07-15 23:48:18.377302] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:29.522 [2024-07-15 23:48:18.377306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:29.522 [2024-07-15 23:48:18.377312] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:29.522 [2024-07-15 23:48:18.377323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:29.522 [2024-07-15 23:48:18.377332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.377338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8c6ec0) 00:22:29.522 [2024-07-15 23:48:18.377344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.522 [2024-07-15 23:48:18.377355] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949e40, cid 0, qid 0 00:22:29.522 [2024-07-15 23:48:18.377460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.522 [2024-07-15 23:48:18.377466] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.522 [2024-07-15 23:48:18.377469] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.377472] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8c6ec0): datao=0, datal=4096, cccid=0 00:22:29.522 [2024-07-15 23:48:18.377476] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x949e40) on tqpair(0x8c6ec0): expected_datao=0, payload_size=4096 00:22:29.522 [2024-07-15 23:48:18.377481] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.377511] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.377515] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418377] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.522 [2024-07-15 23:48:18.418390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.522 [2024-07-15 23:48:18.418393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418397] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949e40) on tqpair=0x8c6ec0 00:22:29.522 [2024-07-15 23:48:18.418405] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:29.522 [2024-07-15 23:48:18.418413] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:29.522 [2024-07-15 23:48:18.418417] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:29.522 [2024-07-15 23:48:18.418421] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:29.522 [2024-07-15 23:48:18.418425] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:29.522 [2024-07-15 
23:48:18.418430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:29.522 [2024-07-15 23:48:18.418438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:29.522 [2024-07-15 23:48:18.418444] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8c6ec0) 00:22:29.522 [2024-07-15 23:48:18.418459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:29.522 [2024-07-15 23:48:18.418471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949e40, cid 0, qid 0 00:22:29.522 [2024-07-15 23:48:18.418553] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.522 [2024-07-15 23:48:18.418559] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.522 [2024-07-15 23:48:18.418562] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418565] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949e40) on tqpair=0x8c6ec0 00:22:29.522 [2024-07-15 23:48:18.418572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8c6ec0) 00:22:29.522 [2024-07-15 23:48:18.418586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:22:29.522 [2024-07-15 23:48:18.418592] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418595] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8c6ec0) 00:22:29.522 [2024-07-15 23:48:18.418603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.522 [2024-07-15 23:48:18.418609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8c6ec0) 00:22:29.522 [2024-07-15 23:48:18.418620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.522 [2024-07-15 23:48:18.418625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418631] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.522 [2024-07-15 23:48:18.418636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.522 [2024-07-15 23:48:18.418641] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:29.522 [2024-07-15 23:48:18.418651] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:29.522 
[2024-07-15 23:48:18.418657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8c6ec0) 00:22:29.522 [2024-07-15 23:48:18.418666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.522 [2024-07-15 23:48:18.418677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949e40, cid 0, qid 0 00:22:29.522 [2024-07-15 23:48:18.418682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x949fc0, cid 1, qid 0 00:22:29.522 [2024-07-15 23:48:18.418686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a140, cid 2, qid 0 00:22:29.522 [2024-07-15 23:48:18.418689] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.522 [2024-07-15 23:48:18.418693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a440, cid 4, qid 0 00:22:29.522 [2024-07-15 23:48:18.418827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.522 [2024-07-15 23:48:18.418833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.522 [2024-07-15 23:48:18.418836] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a440) on tqpair=0x8c6ec0 00:22:29.522 [2024-07-15 23:48:18.418844] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:29.522 [2024-07-15 23:48:18.418849] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:29.522 [2024-07-15 23:48:18.418858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:29.522 [2024-07-15 23:48:18.418862] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8c6ec0) 00:22:29.522 [2024-07-15 23:48:18.418867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.522 [2024-07-15 23:48:18.418879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a440, cid 4, qid 0 00:22:29.522 [2024-07-15 23:48:18.418969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.522 [2024-07-15 23:48:18.418975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.522 [2024-07-15 23:48:18.418978] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418981] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8c6ec0): datao=0, datal=4096, cccid=4 00:22:29.522 [2024-07-15 23:48:18.418985] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94a440) on tqpair(0x8c6ec0): expected_datao=0, payload_size=4096 00:22:29.522 [2024-07-15 23:48:18.418989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418995] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.418998] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.419027] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.522 [2024-07-15 23:48:18.419032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.522 [2024-07-15 23:48:18.419035] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.419039] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a440) on tqpair=0x8c6ec0 00:22:29.522 [2024-07-15 23:48:18.419049] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:29.522 [2024-07-15 23:48:18.419071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.419075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8c6ec0) 00:22:29.522 [2024-07-15 23:48:18.419081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.522 [2024-07-15 23:48:18.419087] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.419090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.419093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8c6ec0) 00:22:29.522 [2024-07-15 23:48:18.419099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.522 [2024-07-15 23:48:18.419112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a440, cid 4, qid 0 00:22:29.522 [2024-07-15 23:48:18.419116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a5c0, cid 5, qid 0 00:22:29.522 [2024-07-15 23:48:18.419258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.522 [2024-07-15 23:48:18.419264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.522 [2024-07-15 23:48:18.419267] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.419270] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8c6ec0): datao=0, datal=1024, cccid=4 00:22:29.522 [2024-07-15 23:48:18.419274] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94a440) on tqpair(0x8c6ec0): expected_datao=0, payload_size=1024 00:22:29.522 [2024-07-15 23:48:18.419278] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.419284] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.419287] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.522 [2024-07-15 23:48:18.419292] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.522 [2024-07-15 23:48:18.419296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.523 [2024-07-15 23:48:18.419300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.419303] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a5c0) on tqpair=0x8c6ec0 00:22:29.523 [2024-07-15 23:48:18.464236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.523 [2024-07-15 23:48:18.464247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.523 [2024-07-15 23:48:18.464254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a440) on tqpair=0x8c6ec0 00:22:29.523 [2024-07-15 23:48:18.464275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8c6ec0) 00:22:29.523 [2024-07-15 23:48:18.464286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.523 [2024-07-15 23:48:18.464302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a440, cid 4, qid 0 00:22:29.523 [2024-07-15 23:48:18.464478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.523 [2024-07-15 23:48:18.464484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.523 
[2024-07-15 23:48:18.464487] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464491] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8c6ec0): datao=0, datal=3072, cccid=4 00:22:29.523 [2024-07-15 23:48:18.464495] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94a440) on tqpair(0x8c6ec0): expected_datao=0, payload_size=3072 00:22:29.523 [2024-07-15 23:48:18.464499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464505] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464509] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.523 [2024-07-15 23:48:18.464577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.523 [2024-07-15 23:48:18.464580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a440) on tqpair=0x8c6ec0 00:22:29.523 [2024-07-15 23:48:18.464591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8c6ec0) 00:22:29.523 [2024-07-15 23:48:18.464601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.523 [2024-07-15 23:48:18.464614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a440, cid 4, qid 0 00:22:29.523 [2024-07-15 23:48:18.464698] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.523 [2024-07-15 23:48:18.464704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:22:29.523 [2024-07-15 23:48:18.464707] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464710] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8c6ec0): datao=0, datal=8, cccid=4 00:22:29.523 [2024-07-15 23:48:18.464714] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x94a440) on tqpair(0x8c6ec0): expected_datao=0, payload_size=8 00:22:29.523 [2024-07-15 23:48:18.464718] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464723] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.523 [2024-07-15 23:48:18.464727] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.785 [2024-07-15 23:48:18.505385] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.785 [2024-07-15 23:48:18.505398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.785 [2024-07-15 23:48:18.505401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.785 [2024-07-15 23:48:18.505405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a440) on tqpair=0x8c6ec0 00:22:29.785 ===================================================== 00:22:29.785 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:29.785 ===================================================== 00:22:29.785 Controller Capabilities/Features 00:22:29.785 ================================ 00:22:29.785 Vendor ID: 0000 00:22:29.785 Subsystem Vendor ID: 0000 00:22:29.785 Serial Number: .................... 00:22:29.785 Model Number: ........................................ 
00:22:29.785 Firmware Version: 24.09 00:22:29.785 Recommended Arb Burst: 0 00:22:29.785 IEEE OUI Identifier: 00 00 00 00:22:29.785 Multi-path I/O 00:22:29.785 May have multiple subsystem ports: No 00:22:29.785 May have multiple controllers: No 00:22:29.785 Associated with SR-IOV VF: No 00:22:29.785 Max Data Transfer Size: 131072 00:22:29.785 Max Number of Namespaces: 0 00:22:29.785 Max Number of I/O Queues: 1024 00:22:29.785 NVMe Specification Version (VS): 1.3 00:22:29.785 NVMe Specification Version (Identify): 1.3 00:22:29.785 Maximum Queue Entries: 128 00:22:29.785 Contiguous Queues Required: Yes 00:22:29.785 Arbitration Mechanisms Supported 00:22:29.785 Weighted Round Robin: Not Supported 00:22:29.785 Vendor Specific: Not Supported 00:22:29.785 Reset Timeout: 15000 ms 00:22:29.785 Doorbell Stride: 4 bytes 00:22:29.785 NVM Subsystem Reset: Not Supported 00:22:29.785 Command Sets Supported 00:22:29.785 NVM Command Set: Supported 00:22:29.785 Boot Partition: Not Supported 00:22:29.785 Memory Page Size Minimum: 4096 bytes 00:22:29.785 Memory Page Size Maximum: 4096 bytes 00:22:29.785 Persistent Memory Region: Not Supported 00:22:29.785 Optional Asynchronous Events Supported 00:22:29.785 Namespace Attribute Notices: Not Supported 00:22:29.785 Firmware Activation Notices: Not Supported 00:22:29.785 ANA Change Notices: Not Supported 00:22:29.785 PLE Aggregate Log Change Notices: Not Supported 00:22:29.785 LBA Status Info Alert Notices: Not Supported 00:22:29.785 EGE Aggregate Log Change Notices: Not Supported 00:22:29.785 Normal NVM Subsystem Shutdown event: Not Supported 00:22:29.785 Zone Descriptor Change Notices: Not Supported 00:22:29.785 Discovery Log Change Notices: Supported 00:22:29.785 Controller Attributes 00:22:29.785 128-bit Host Identifier: Not Supported 00:22:29.785 Non-Operational Permissive Mode: Not Supported 00:22:29.785 NVM Sets: Not Supported 00:22:29.785 Read Recovery Levels: Not Supported 00:22:29.785 Endurance Groups: Not Supported 00:22:29.785 
Predictable Latency Mode: Not Supported 00:22:29.785 Traffic Based Keep ALive: Not Supported 00:22:29.785 Namespace Granularity: Not Supported 00:22:29.785 SQ Associations: Not Supported 00:22:29.785 UUID List: Not Supported 00:22:29.785 Multi-Domain Subsystem: Not Supported 00:22:29.785 Fixed Capacity Management: Not Supported 00:22:29.785 Variable Capacity Management: Not Supported 00:22:29.785 Delete Endurance Group: Not Supported 00:22:29.785 Delete NVM Set: Not Supported 00:22:29.785 Extended LBA Formats Supported: Not Supported 00:22:29.785 Flexible Data Placement Supported: Not Supported 00:22:29.785 00:22:29.785 Controller Memory Buffer Support 00:22:29.785 ================================ 00:22:29.785 Supported: No 00:22:29.785 00:22:29.785 Persistent Memory Region Support 00:22:29.785 ================================ 00:22:29.785 Supported: No 00:22:29.785 00:22:29.785 Admin Command Set Attributes 00:22:29.785 ============================ 00:22:29.785 Security Send/Receive: Not Supported 00:22:29.785 Format NVM: Not Supported 00:22:29.785 Firmware Activate/Download: Not Supported 00:22:29.785 Namespace Management: Not Supported 00:22:29.785 Device Self-Test: Not Supported 00:22:29.785 Directives: Not Supported 00:22:29.785 NVMe-MI: Not Supported 00:22:29.785 Virtualization Management: Not Supported 00:22:29.785 Doorbell Buffer Config: Not Supported 00:22:29.785 Get LBA Status Capability: Not Supported 00:22:29.785 Command & Feature Lockdown Capability: Not Supported 00:22:29.785 Abort Command Limit: 1 00:22:29.785 Async Event Request Limit: 4 00:22:29.785 Number of Firmware Slots: N/A 00:22:29.785 Firmware Slot 1 Read-Only: N/A 00:22:29.785 Firmware Activation Without Reset: N/A 00:22:29.785 Multiple Update Detection Support: N/A 00:22:29.785 Firmware Update Granularity: No Information Provided 00:22:29.785 Per-Namespace SMART Log: No 00:22:29.785 Asymmetric Namespace Access Log Page: Not Supported 00:22:29.785 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:29.785 Command Effects Log Page: Not Supported 00:22:29.785 Get Log Page Extended Data: Supported 00:22:29.785 Telemetry Log Pages: Not Supported 00:22:29.785 Persistent Event Log Pages: Not Supported 00:22:29.785 Supported Log Pages Log Page: May Support 00:22:29.785 Commands Supported & Effects Log Page: Not Supported 00:22:29.785 Feature Identifiers & Effects Log Page:May Support 00:22:29.785 NVMe-MI Commands & Effects Log Page: May Support 00:22:29.785 Data Area 4 for Telemetry Log: Not Supported 00:22:29.785 Error Log Page Entries Supported: 128 00:22:29.785 Keep Alive: Not Supported 00:22:29.785 00:22:29.785 NVM Command Set Attributes 00:22:29.785 ========================== 00:22:29.785 Submission Queue Entry Size 00:22:29.785 Max: 1 00:22:29.785 Min: 1 00:22:29.785 Completion Queue Entry Size 00:22:29.785 Max: 1 00:22:29.785 Min: 1 00:22:29.785 Number of Namespaces: 0 00:22:29.785 Compare Command: Not Supported 00:22:29.785 Write Uncorrectable Command: Not Supported 00:22:29.785 Dataset Management Command: Not Supported 00:22:29.786 Write Zeroes Command: Not Supported 00:22:29.786 Set Features Save Field: Not Supported 00:22:29.786 Reservations: Not Supported 00:22:29.786 Timestamp: Not Supported 00:22:29.786 Copy: Not Supported 00:22:29.786 Volatile Write Cache: Not Present 00:22:29.786 Atomic Write Unit (Normal): 1 00:22:29.786 Atomic Write Unit (PFail): 1 00:22:29.786 Atomic Compare & Write Unit: 1 00:22:29.786 Fused Compare & Write: Supported 00:22:29.786 Scatter-Gather List 00:22:29.786 SGL Command Set: Supported 00:22:29.786 SGL Keyed: Supported 00:22:29.786 SGL Bit Bucket Descriptor: Not Supported 00:22:29.786 SGL Metadata Pointer: Not Supported 00:22:29.786 Oversized SGL: Not Supported 00:22:29.786 SGL Metadata Address: Not Supported 00:22:29.786 SGL Offset: Supported 00:22:29.786 Transport SGL Data Block: Not Supported 00:22:29.786 Replay Protected Memory Block: Not Supported 00:22:29.786 00:22:29.786 
Firmware Slot Information 00:22:29.786 ========================= 00:22:29.786 Active slot: 0 00:22:29.786 00:22:29.786 00:22:29.786 Error Log 00:22:29.786 ========= 00:22:29.786 00:22:29.786 Active Namespaces 00:22:29.786 ================= 00:22:29.786 Discovery Log Page 00:22:29.786 ================== 00:22:29.786 Generation Counter: 2 00:22:29.786 Number of Records: 2 00:22:29.786 Record Format: 0 00:22:29.786 00:22:29.786 Discovery Log Entry 0 00:22:29.786 ---------------------- 00:22:29.786 Transport Type: 3 (TCP) 00:22:29.786 Address Family: 1 (IPv4) 00:22:29.786 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:29.786 Entry Flags: 00:22:29.786 Duplicate Returned Information: 1 00:22:29.786 Explicit Persistent Connection Support for Discovery: 1 00:22:29.786 Transport Requirements: 00:22:29.786 Secure Channel: Not Required 00:22:29.786 Port ID: 0 (0x0000) 00:22:29.786 Controller ID: 65535 (0xffff) 00:22:29.786 Admin Max SQ Size: 128 00:22:29.786 Transport Service Identifier: 4420 00:22:29.786 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:29.786 Transport Address: 10.0.0.2 00:22:29.786 Discovery Log Entry 1 00:22:29.786 ---------------------- 00:22:29.786 Transport Type: 3 (TCP) 00:22:29.786 Address Family: 1 (IPv4) 00:22:29.786 Subsystem Type: 2 (NVM Subsystem) 00:22:29.786 Entry Flags: 00:22:29.786 Duplicate Returned Information: 0 00:22:29.786 Explicit Persistent Connection Support for Discovery: 0 00:22:29.786 Transport Requirements: 00:22:29.786 Secure Channel: Not Required 00:22:29.786 Port ID: 0 (0x0000) 00:22:29.786 Controller ID: 65535 (0xffff) 00:22:29.786 Admin Max SQ Size: 128 00:22:29.786 Transport Service Identifier: 4420 00:22:29.786 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:29.786 Transport Address: 10.0.0.2 [2024-07-15 23:48:18.505482] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:29.786 [2024-07-15 23:48:18.505492] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949e40) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 23:48:18.505499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.786 [2024-07-15 23:48:18.505504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x949fc0) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 23:48:18.505508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.786 [2024-07-15 23:48:18.505512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a140) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 23:48:18.505516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.786 [2024-07-15 23:48:18.505521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 23:48:18.505524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.786 [2024-07-15 23:48:18.505534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.786 [2024-07-15 23:48:18.505547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.786 [2024-07-15 23:48:18.505560] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.786 [2024-07-15 23:48:18.505642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.786 [2024-07-15 23:48:18.505648] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.786 [2024-07-15 23:48:18.505651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 23:48:18.505660] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505667] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.786 [2024-07-15 23:48:18.505673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.786 [2024-07-15 23:48:18.505685] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.786 [2024-07-15 23:48:18.505772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.786 [2024-07-15 23:48:18.505778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.786 [2024-07-15 23:48:18.505781] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 23:48:18.505788] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:29.786 [2024-07-15 23:48:18.505792] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:29.786 [2024-07-15 23:48:18.505800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505804] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505807] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.786 [2024-07-15 23:48:18.505812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.786 [2024-07-15 23:48:18.505821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.786 [2024-07-15 23:48:18.505904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.786 [2024-07-15 23:48:18.505910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.786 [2024-07-15 23:48:18.505914] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 23:48:18.505927] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505930] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.505933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.786 [2024-07-15 23:48:18.505939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.786 [2024-07-15 23:48:18.505948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.786 [2024-07-15 23:48:18.506025] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.786 [2024-07-15 23:48:18.506030] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.786 [2024-07-15 23:48:18.506033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506037] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 
23:48:18.506045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506051] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.786 [2024-07-15 23:48:18.506057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.786 [2024-07-15 23:48:18.506066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.786 [2024-07-15 23:48:18.506144] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.786 [2024-07-15 23:48:18.506149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.786 [2024-07-15 23:48:18.506152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 23:48:18.506164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.786 [2024-07-15 23:48:18.506176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.786 [2024-07-15 23:48:18.506185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.786 [2024-07-15 23:48:18.506269] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.786 [2024-07-15 23:48:18.506276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.786 [2024-07-15 23:48:18.506279] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506282] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 23:48:18.506290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.786 [2024-07-15 23:48:18.506303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.786 [2024-07-15 23:48:18.506313] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.786 [2024-07-15 23:48:18.506392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.786 [2024-07-15 23:48:18.506398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.786 [2024-07-15 23:48:18.506401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506404] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.786 [2024-07-15 23:48:18.506414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.786 [2024-07-15 23:48:18.506418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.506426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.506435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 
23:48:18.506513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.506518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.506521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.506532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.506544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.506554] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 23:48:18.506634] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.506640] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.506643] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.506654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.506666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.506676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 23:48:18.506813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.506818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.506821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.506833] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506837] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506840] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.506846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.506856] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 23:48:18.506935] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.506941] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.506944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.506957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.506961] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 
[2024-07-15 23:48:18.506964] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.506969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.506978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 23:48:18.507057] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.507062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.507065] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.507077] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507080] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507083] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.507089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.507098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 23:48:18.507175] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.507181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.507184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 
00:22:29.787 [2024-07-15 23:48:18.507195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507202] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.507208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.507217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 23:48:18.507295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.507301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.507304] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.507316] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507319] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.507328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.507338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 23:48:18.507416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.507421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 
[2024-07-15 23:48:18.507424] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.507436] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.507452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.507460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 23:48:18.507538] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.507543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.507547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507550] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.507558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507561] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.507570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.507579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 
00:22:29.787 [2024-07-15 23:48:18.507659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.507665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.507668] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507671] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.507679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.507691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.507700] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 23:48:18.507779] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.507785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.507787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.507799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.787 [2024-07-15 23:48:18.507811] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.787 [2024-07-15 23:48:18.507820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.787 [2024-07-15 23:48:18.507897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.787 [2024-07-15 23:48:18.507903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.787 [2024-07-15 23:48:18.507906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.787 [2024-07-15 23:48:18.507917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.787 [2024-07-15 23:48:18.507921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.507924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.788 [2024-07-15 23:48:18.507931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.788 [2024-07-15 23:48:18.507940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.788 [2024-07-15 23:48:18.508018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.788 [2024-07-15 23:48:18.508024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.788 [2024-07-15 23:48:18.508026] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.508030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.788 [2024-07-15 23:48:18.508038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.508041] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.508044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.788 [2024-07-15 23:48:18.508050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.788 [2024-07-15 23:48:18.508059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.788 [2024-07-15 23:48:18.508137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.788 [2024-07-15 23:48:18.508143] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.788 [2024-07-15 23:48:18.508146] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.508150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.788 [2024-07-15 23:48:18.508157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.508161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.508164] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.788 [2024-07-15 23:48:18.508170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.788 [2024-07-15 23:48:18.508178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.788 [2024-07-15 23:48:18.512234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.788 [2024-07-15 23:48:18.512243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.788 [2024-07-15 23:48:18.512246] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.512249] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.788 [2024-07-15 23:48:18.512259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.512263] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.512266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8c6ec0) 00:22:29.788 [2024-07-15 23:48:18.512272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.788 [2024-07-15 23:48:18.512283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x94a2c0, cid 3, qid 0 00:22:29.788 [2024-07-15 23:48:18.512453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.788 [2024-07-15 23:48:18.512459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.788 [2024-07-15 23:48:18.512462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.512465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x94a2c0) on tqpair=0x8c6ec0 00:22:29.788 [2024-07-15 23:48:18.512471] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:29.788 00:22:29.788 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:29.788 [2024-07-15 23:48:18.548404] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:22:29.788 [2024-07-15 23:48:18.548437] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086857 ] 00:22:29.788 [2024-07-15 23:48:18.578483] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:29.788 [2024-07-15 23:48:18.578526] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:29.788 [2024-07-15 23:48:18.578531] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:29.788 [2024-07-15 23:48:18.578541] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:29.788 [2024-07-15 23:48:18.578547] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:29.788 [2024-07-15 23:48:18.578821] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:29.788 [2024-07-15 23:48:18.578846] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1170ec0 0 00:22:29.788 [2024-07-15 23:48:18.589236] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:29.788 [2024-07-15 23:48:18.589253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:29.788 [2024-07-15 23:48:18.589256] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:29.788 [2024-07-15 23:48:18.589260] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:29.788 [2024-07-15 23:48:18.589288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.589294] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.589297] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1170ec0) 00:22:29.788 [2024-07-15 23:48:18.589308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:29.788 [2024-07-15 23:48:18.589323] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e40, cid 0, qid 0 00:22:29.788 [2024-07-15 23:48:18.597237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.788 [2024-07-15 23:48:18.597247] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.788 [2024-07-15 23:48:18.597250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3e40) on tqpair=0x1170ec0 00:22:29.788 [2024-07-15 23:48:18.597265] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:29.788 [2024-07-15 23:48:18.597270] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:29.788 [2024-07-15 23:48:18.597275] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:29.788 [2024-07-15 23:48:18.597284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597289] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1170ec0) 00:22:29.788 [2024-07-15 23:48:18.597298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.788 [2024-07-15 23:48:18.597311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e40, cid 0, qid 0 00:22:29.788 [2024-07-15 23:48:18.597502] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.788 [2024-07-15 23:48:18.597508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.788 [2024-07-15 23:48:18.597513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3e40) on tqpair=0x1170ec0 00:22:29.788 [2024-07-15 23:48:18.597521] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:29.788 [2024-07-15 23:48:18.597528] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:29.788 [2024-07-15 23:48:18.597534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1170ec0) 00:22:29.788 [2024-07-15 23:48:18.597546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.788 [2024-07-15 23:48:18.597556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e40, cid 0, qid 0 00:22:29.788 [2024-07-15 23:48:18.597641] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.788 [2024-07-15 23:48:18.597647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.788 [2024-07-15 23:48:18.597650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3e40) on tqpair=0x1170ec0 00:22:29.788 [2024-07-15 23:48:18.597658] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to check en (no timeout) 00:22:29.788 [2024-07-15 23:48:18.597664] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:29.788 [2024-07-15 23:48:18.597670] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597674] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1170ec0) 00:22:29.788 [2024-07-15 23:48:18.597683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.788 [2024-07-15 23:48:18.597692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e40, cid 0, qid 0 00:22:29.788 [2024-07-15 23:48:18.597771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.788 [2024-07-15 23:48:18.597777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.788 [2024-07-15 23:48:18.597780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3e40) on tqpair=0x1170ec0 00:22:29.788 [2024-07-15 23:48:18.597787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:29.788 [2024-07-15 23:48:18.597795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597801] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1170ec0) 00:22:29.788 [2024-07-15 23:48:18.597807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.788 [2024-07-15 23:48:18.597817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e40, cid 0, qid 0 00:22:29.788 [2024-07-15 23:48:18.597893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.788 [2024-07-15 23:48:18.597899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.788 [2024-07-15 23:48:18.597902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.788 [2024-07-15 23:48:18.597905] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3e40) on tqpair=0x1170ec0 00:22:29.788 [2024-07-15 23:48:18.597909] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:29.788 [2024-07-15 23:48:18.597916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:29.788 [2024-07-15 23:48:18.597922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:29.788 [2024-07-15 23:48:18.598027] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:29.789 [2024-07-15 23:48:18.598031] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:29.789 [2024-07-15 23:48:18.598038] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.598041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.598044] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1170ec0) 00:22:29.789 [2024-07-15 23:48:18.598050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.789 [2024-07-15 23:48:18.598060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e40, cid 0, qid 0 00:22:29.789 [2024-07-15 23:48:18.598136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.789 [2024-07-15 23:48:18.598142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.789 [2024-07-15 23:48:18.598145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.598148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3e40) on tqpair=0x1170ec0 00:22:29.789 [2024-07-15 23:48:18.598152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:29.789 [2024-07-15 23:48:18.598160] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.598163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.598167] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1170ec0) 00:22:29.789 [2024-07-15 23:48:18.598173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.789 [2024-07-15 23:48:18.598183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e40, cid 0, qid 0 00:22:29.789 [2024-07-15 23:48:18.598262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.789 [2024-07-15 23:48:18.598269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.789 [2024-07-15 23:48:18.598272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.598275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3e40) on tqpair=0x1170ec0 00:22:29.789 [2024-07-15 23:48:18.598279] 
nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:29.789 [2024-07-15 23:48:18.598283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:29.789 [2024-07-15 23:48:18.598290] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:29.789 [2024-07-15 23:48:18.598301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:29.789 [2024-07-15 23:48:18.598309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.598312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1170ec0) 00:22:29.789 [2024-07-15 23:48:18.598318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.789 [2024-07-15 23:48:18.598329] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e40, cid 0, qid 0 00:22:29.789 [2024-07-15 23:48:18.598452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.789 [2024-07-15 23:48:18.598458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.789 [2024-07-15 23:48:18.598461] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.598465] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1170ec0): datao=0, datal=4096, cccid=0 00:22:29.789 [2024-07-15 23:48:18.598469] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f3e40) on tqpair(0x1170ec0): expected_datao=0, payload_size=4096 00:22:29.789 [2024-07-15 23:48:18.598472] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.598502] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.598506] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.789 [2024-07-15 23:48:18.641246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.789 [2024-07-15 23:48:18.641249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641253] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3e40) on tqpair=0x1170ec0 00:22:29.789 [2024-07-15 23:48:18.641259] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:29.789 [2024-07-15 23:48:18.641267] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:29.789 [2024-07-15 23:48:18.641271] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:29.789 [2024-07-15 23:48:18.641274] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:29.789 [2024-07-15 23:48:18.641278] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:29.789 [2024-07-15 23:48:18.641282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:29.789 [2024-07-15 23:48:18.641291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:29.789 [2024-07-15 23:48:18.641298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641301] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1170ec0) 00:22:29.789 [2024-07-15 23:48:18.641311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:29.789 [2024-07-15 23:48:18.641324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3e40, cid 0, qid 0 00:22:29.789 [2024-07-15 23:48:18.641404] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.789 [2024-07-15 23:48:18.641410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.789 [2024-07-15 23:48:18.641413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641416] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3e40) on tqpair=0x1170ec0 00:22:29.789 [2024-07-15 23:48:18.641422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641425] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1170ec0) 00:22:29.789 [2024-07-15 23:48:18.641434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.789 [2024-07-15 23:48:18.641440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1170ec0) 00:22:29.789 [2024-07-15 23:48:18.641454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:29.789 [2024-07-15 23:48:18.641459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641463] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1170ec0) 00:22:29.789 [2024-07-15 23:48:18.641471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.789 [2024-07-15 23:48:18.641476] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1170ec0) 00:22:29.789 [2024-07-15 23:48:18.641487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.789 [2024-07-15 23:48:18.641491] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:29.789 [2024-07-15 23:48:18.641502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:29.789 [2024-07-15 23:48:18.641508] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1170ec0) 00:22:29.789 [2024-07-15 23:48:18.641516] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.789 [2024-07-15 23:48:18.641528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x11f3e40, cid 0, qid 0 00:22:29.789 [2024-07-15 23:48:18.641532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f3fc0, cid 1, qid 0 00:22:29.789 [2024-07-15 23:48:18.641536] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4140, cid 2, qid 0 00:22:29.789 [2024-07-15 23:48:18.641540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f42c0, cid 3, qid 0 00:22:29.789 [2024-07-15 23:48:18.641544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4440, cid 4, qid 0 00:22:29.789 [2024-07-15 23:48:18.641661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.789 [2024-07-15 23:48:18.641667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.789 [2024-07-15 23:48:18.641671] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.789 [2024-07-15 23:48:18.641674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4440) on tqpair=0x1170ec0 00:22:29.789 [2024-07-15 23:48:18.641677] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:29.790 [2024-07-15 23:48:18.641682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.641688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.641694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.641699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.641703] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.641706] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1170ec0) 00:22:29.790 [2024-07-15 23:48:18.641711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:29.790 [2024-07-15 23:48:18.641721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4440, cid 4, qid 0 00:22:29.790 [2024-07-15 23:48:18.641802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.790 [2024-07-15 23:48:18.641808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.790 [2024-07-15 23:48:18.641811] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.641814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4440) on tqpair=0x1170ec0 00:22:29.790 [2024-07-15 23:48:18.641866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.641875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.641882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.641885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1170ec0) 00:22:29.790 [2024-07-15 23:48:18.641891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.790 [2024-07-15 23:48:18.641901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4440, cid 4, qid 0 00:22:29.790 [2024-07-15 23:48:18.641994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.790 [2024-07-15 23:48:18.642000] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.790 [2024-07-15 23:48:18.642003] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642006] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1170ec0): datao=0, datal=4096, cccid=4 00:22:29.790 [2024-07-15 23:48:18.642010] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f4440) on tqpair(0x1170ec0): expected_datao=0, payload_size=4096 00:22:29.790 [2024-07-15 23:48:18.642014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642020] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642023] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.790 [2024-07-15 23:48:18.642100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.790 [2024-07-15 23:48:18.642102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4440) on tqpair=0x1170ec0 00:22:29.790 [2024-07-15 23:48:18.642113] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:29.790 [2024-07-15 23:48:18.642122] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.642131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.642137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1170ec0) 00:22:29.790 [2024-07-15 23:48:18.642146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.790 [2024-07-15 23:48:18.642157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4440, cid 4, qid 0 00:22:29.790 [2024-07-15 23:48:18.642276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.790 [2024-07-15 23:48:18.642282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.790 [2024-07-15 23:48:18.642285] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642288] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1170ec0): datao=0, datal=4096, cccid=4 00:22:29.790 [2024-07-15 23:48:18.642292] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f4440) on tqpair(0x1170ec0): expected_datao=0, payload_size=4096 00:22:29.790 [2024-07-15 23:48:18.642298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642304] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642308] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.790 [2024-07-15 23:48:18.642381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.790 [2024-07-15 23:48:18.642383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4440) on tqpair=0x1170ec0 00:22:29.790 [2024-07-15 23:48:18.642398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:29.790 [2024-07-15 
23:48:18.642406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.642413] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1170ec0) 00:22:29.790 [2024-07-15 23:48:18.642422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.790 [2024-07-15 23:48:18.642433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4440, cid 4, qid 0 00:22:29.790 [2024-07-15 23:48:18.642528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.790 [2024-07-15 23:48:18.642534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.790 [2024-07-15 23:48:18.642537] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642540] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1170ec0): datao=0, datal=4096, cccid=4 00:22:29.790 [2024-07-15 23:48:18.642544] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f4440) on tqpair(0x1170ec0): expected_datao=0, payload_size=4096 00:22:29.790 [2024-07-15 23:48:18.642547] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642553] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642556] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642583] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.790 [2024-07-15 23:48:18.642588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.790 [2024-07-15 23:48:18.642591] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4440) on tqpair=0x1170ec0 00:22:29.790 [2024-07-15 23:48:18.642600] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.642607] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.642616] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.642622] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.642626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.642630] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.642635] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:29.790 [2024-07-15 23:48:18.642640] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:29.790 [2024-07-15 23:48:18.642645] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:29.790 [2024-07-15 23:48:18.642657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x1170ec0) 00:22:29.790 [2024-07-15 23:48:18.642667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.790 [2024-07-15 23:48:18.642672] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1170ec0) 00:22:29.790 [2024-07-15 23:48:18.642684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:29.790 [2024-07-15 23:48:18.642697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4440, cid 4, qid 0 00:22:29.790 [2024-07-15 23:48:18.642701] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f45c0, cid 5, qid 0 00:22:29.790 [2024-07-15 23:48:18.642848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.790 [2024-07-15 23:48:18.642853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.790 [2024-07-15 23:48:18.642856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4440) on tqpair=0x1170ec0 00:22:29.790 [2024-07-15 23:48:18.642865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.790 [2024-07-15 23:48:18.642870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.790 [2024-07-15 23:48:18.642873] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f45c0) on tqpair=0x1170ec0 00:22:29.790 [2024-07-15 23:48:18.642885] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642888] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1170ec0) 00:22:29.790 [2024-07-15 23:48:18.642894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.790 [2024-07-15 23:48:18.642904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f45c0, cid 5, qid 0 00:22:29.790 [2024-07-15 23:48:18.642983] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.790 [2024-07-15 23:48:18.642988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.790 [2024-07-15 23:48:18.642991] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.642994] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f45c0) on tqpair=0x1170ec0 00:22:29.790 [2024-07-15 23:48:18.643002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.790 [2024-07-15 23:48:18.643006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1170ec0) 00:22:29.790 [2024-07-15 23:48:18.643011] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.790 [2024-07-15 23:48:18.643020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f45c0, cid 5, qid 0 00:22:29.790 [2024-07-15 23:48:18.643107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.790 [2024-07-15 23:48:18.643113] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.791 [2024-07-15 23:48:18.643116] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f45c0) on 
tqpair=0x1170ec0 00:22:29.791 [2024-07-15 23:48:18.643128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1170ec0) 00:22:29.791 [2024-07-15 23:48:18.643138] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.791 [2024-07-15 23:48:18.643147] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f45c0, cid 5, qid 0 00:22:29.791 [2024-07-15 23:48:18.643240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.791 [2024-07-15 23:48:18.643246] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.791 [2024-07-15 23:48:18.643249] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f45c0) on tqpair=0x1170ec0 00:22:29.791 [2024-07-15 23:48:18.643265] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1170ec0) 00:22:29.791 [2024-07-15 23:48:18.643275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.791 [2024-07-15 23:48:18.643281] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1170ec0) 00:22:29.791 [2024-07-15 23:48:18.643289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.791 [2024-07-15 
23:48:18.643296] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1170ec0) 00:22:29.791 [2024-07-15 23:48:18.643304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.791 [2024-07-15 23:48:18.643310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1170ec0) 00:22:29.791 [2024-07-15 23:48:18.643319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.791 [2024-07-15 23:48:18.643330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f45c0, cid 5, qid 0 00:22:29.791 [2024-07-15 23:48:18.643334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4440, cid 4, qid 0 00:22:29.791 [2024-07-15 23:48:18.643338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f4740, cid 6, qid 0 00:22:29.791 [2024-07-15 23:48:18.643342] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f48c0, cid 7, qid 0 00:22:29.791 [2024-07-15 23:48:18.643496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.791 [2024-07-15 23:48:18.643501] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.791 [2024-07-15 23:48:18.643505] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643508] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1170ec0): datao=0, datal=8192, cccid=5 00:22:29.791 [2024-07-15 23:48:18.643512] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x11f45c0) on tqpair(0x1170ec0): expected_datao=0, payload_size=8192 00:22:29.791 [2024-07-15 23:48:18.643515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643570] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643574] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.791 [2024-07-15 23:48:18.643587] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.791 [2024-07-15 23:48:18.643592] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643595] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1170ec0): datao=0, datal=512, cccid=4 00:22:29.791 [2024-07-15 23:48:18.643599] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f4440) on tqpair(0x1170ec0): expected_datao=0, payload_size=512 00:22:29.791 [2024-07-15 23:48:18.643603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643608] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643611] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.791 [2024-07-15 23:48:18.643620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.791 [2024-07-15 23:48:18.643623] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643626] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1170ec0): datao=0, datal=512, cccid=6 00:22:29.791 [2024-07-15 23:48:18.643630] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f4740) on tqpair(0x1170ec0): expected_datao=0, 
payload_size=512 00:22:29.791 [2024-07-15 23:48:18.643634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643639] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643642] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:29.791 [2024-07-15 23:48:18.643651] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:29.791 [2024-07-15 23:48:18.643654] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643657] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1170ec0): datao=0, datal=4096, cccid=7 00:22:29.791 [2024-07-15 23:48:18.643661] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11f48c0) on tqpair(0x1170ec0): expected_datao=0, payload_size=4096 00:22:29.791 [2024-07-15 23:48:18.643665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643670] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643673] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.791 [2024-07-15 23:48:18.643687] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.791 [2024-07-15 23:48:18.643690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f45c0) on tqpair=0x1170ec0 00:22:29.791 [2024-07-15 23:48:18.643704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.791 [2024-07-15 23:48:18.643709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.791 [2024-07-15 
23:48:18.643712] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4440) on tqpair=0x1170ec0 00:22:29.791 [2024-07-15 23:48:18.643723] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.791 [2024-07-15 23:48:18.643728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.791 [2024-07-15 23:48:18.643731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4740) on tqpair=0x1170ec0 00:22:29.791 [2024-07-15 23:48:18.643740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.791 [2024-07-15 23:48:18.643745] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.791 [2024-07-15 23:48:18.643748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.791 [2024-07-15 23:48:18.643751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f48c0) on tqpair=0x1170ec0 00:22:29.791 ===================================================== 00:22:29.791 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.791 ===================================================== 00:22:29.791 Controller Capabilities/Features 00:22:29.791 ================================ 00:22:29.791 Vendor ID: 8086 00:22:29.791 Subsystem Vendor ID: 8086 00:22:29.791 Serial Number: SPDK00000000000001 00:22:29.791 Model Number: SPDK bdev Controller 00:22:29.791 Firmware Version: 24.09 00:22:29.791 Recommended Arb Burst: 6 00:22:29.791 IEEE OUI Identifier: e4 d2 5c 00:22:29.791 Multi-path I/O 00:22:29.791 May have multiple subsystem ports: Yes 00:22:29.791 May have multiple controllers: Yes 00:22:29.791 Associated with SR-IOV VF: No 00:22:29.791 Max Data Transfer Size: 131072 00:22:29.791 Max Number of Namespaces: 32 
00:22:29.791 Max Number of I/O Queues: 127 00:22:29.791 NVMe Specification Version (VS): 1.3 00:22:29.791 NVMe Specification Version (Identify): 1.3 00:22:29.791 Maximum Queue Entries: 128 00:22:29.791 Contiguous Queues Required: Yes 00:22:29.791 Arbitration Mechanisms Supported 00:22:29.791 Weighted Round Robin: Not Supported 00:22:29.791 Vendor Specific: Not Supported 00:22:29.791 Reset Timeout: 15000 ms 00:22:29.791 Doorbell Stride: 4 bytes 00:22:29.791 NVM Subsystem Reset: Not Supported 00:22:29.791 Command Sets Supported 00:22:29.791 NVM Command Set: Supported 00:22:29.791 Boot Partition: Not Supported 00:22:29.791 Memory Page Size Minimum: 4096 bytes 00:22:29.791 Memory Page Size Maximum: 4096 bytes 00:22:29.791 Persistent Memory Region: Not Supported 00:22:29.791 Optional Asynchronous Events Supported 00:22:29.791 Namespace Attribute Notices: Supported 00:22:29.791 Firmware Activation Notices: Not Supported 00:22:29.791 ANA Change Notices: Not Supported 00:22:29.791 PLE Aggregate Log Change Notices: Not Supported 00:22:29.791 LBA Status Info Alert Notices: Not Supported 00:22:29.791 EGE Aggregate Log Change Notices: Not Supported 00:22:29.791 Normal NVM Subsystem Shutdown event: Not Supported 00:22:29.791 Zone Descriptor Change Notices: Not Supported 00:22:29.791 Discovery Log Change Notices: Not Supported 00:22:29.791 Controller Attributes 00:22:29.791 128-bit Host Identifier: Supported 00:22:29.791 Non-Operational Permissive Mode: Not Supported 00:22:29.791 NVM Sets: Not Supported 00:22:29.791 Read Recovery Levels: Not Supported 00:22:29.791 Endurance Groups: Not Supported 00:22:29.791 Predictable Latency Mode: Not Supported 00:22:29.791 Traffic Based Keep ALive: Not Supported 00:22:29.791 Namespace Granularity: Not Supported 00:22:29.791 SQ Associations: Not Supported 00:22:29.791 UUID List: Not Supported 00:22:29.791 Multi-Domain Subsystem: Not Supported 00:22:29.791 Fixed Capacity Management: Not Supported 00:22:29.791 Variable Capacity Management: Not 
Supported 00:22:29.791 Delete Endurance Group: Not Supported 00:22:29.791 Delete NVM Set: Not Supported 00:22:29.791 Extended LBA Formats Supported: Not Supported 00:22:29.791 Flexible Data Placement Supported: Not Supported 00:22:29.791 00:22:29.791 Controller Memory Buffer Support 00:22:29.791 ================================ 00:22:29.791 Supported: No 00:22:29.791 00:22:29.791 Persistent Memory Region Support 00:22:29.791 ================================ 00:22:29.792 Supported: No 00:22:29.792 00:22:29.792 Admin Command Set Attributes 00:22:29.792 ============================ 00:22:29.792 Security Send/Receive: Not Supported 00:22:29.792 Format NVM: Not Supported 00:22:29.792 Firmware Activate/Download: Not Supported 00:22:29.792 Namespace Management: Not Supported 00:22:29.792 Device Self-Test: Not Supported 00:22:29.792 Directives: Not Supported 00:22:29.792 NVMe-MI: Not Supported 00:22:29.792 Virtualization Management: Not Supported 00:22:29.792 Doorbell Buffer Config: Not Supported 00:22:29.792 Get LBA Status Capability: Not Supported 00:22:29.792 Command & Feature Lockdown Capability: Not Supported 00:22:29.792 Abort Command Limit: 4 00:22:29.792 Async Event Request Limit: 4 00:22:29.792 Number of Firmware Slots: N/A 00:22:29.792 Firmware Slot 1 Read-Only: N/A 00:22:29.792 Firmware Activation Without Reset: N/A 00:22:29.792 Multiple Update Detection Support: N/A 00:22:29.792 Firmware Update Granularity: No Information Provided 00:22:29.792 Per-Namespace SMART Log: No 00:22:29.792 Asymmetric Namespace Access Log Page: Not Supported 00:22:29.792 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:29.792 Command Effects Log Page: Supported 00:22:29.792 Get Log Page Extended Data: Supported 00:22:29.792 Telemetry Log Pages: Not Supported 00:22:29.792 Persistent Event Log Pages: Not Supported 00:22:29.792 Supported Log Pages Log Page: May Support 00:22:29.792 Commands Supported & Effects Log Page: Not Supported 00:22:29.792 Feature Identifiers & Effects Log Page: May Support 00:22:29.792 NVMe-MI Commands & Effects Log Page: May Support 00:22:29.792 Data Area 4 for Telemetry Log: Not Supported 00:22:29.792 Error Log Page Entries Supported: 128 00:22:29.792 Keep Alive: Supported 00:22:29.792 Keep Alive Granularity: 10000 ms 00:22:29.792 00:22:29.792 NVM Command Set Attributes 00:22:29.792 ========================== 00:22:29.792 Submission Queue Entry Size 00:22:29.792 Max: 64 00:22:29.792 Min: 64 00:22:29.792 Completion Queue Entry Size 00:22:29.792 Max: 16 00:22:29.792 Min: 16 00:22:29.792 Number of Namespaces: 32 00:22:29.792 Compare Command: Supported 00:22:29.792 Write Uncorrectable Command: Not Supported 00:22:29.792 Dataset Management Command: Supported 00:22:29.792 Write Zeroes Command: Supported 00:22:29.792 Set Features Save Field: Not Supported 00:22:29.792 Reservations: Supported 00:22:29.792 Timestamp: Not Supported 00:22:29.792 Copy: Supported 00:22:29.792 Volatile Write Cache: Present 00:22:29.792 Atomic Write Unit (Normal): 1 00:22:29.792 Atomic Write Unit (PFail): 1 00:22:29.792 Atomic Compare & Write Unit: 1 00:22:29.792 Fused Compare & Write: Supported 00:22:29.792 Scatter-Gather List 00:22:29.792 SGL Command Set: Supported 00:22:29.792 SGL Keyed: Supported 00:22:29.792 SGL Bit Bucket Descriptor: Not Supported 00:22:29.792 SGL Metadata Pointer: Not Supported 00:22:29.792 Oversized SGL: Not Supported 00:22:29.792 SGL Metadata Address: Not Supported 00:22:29.792 SGL Offset: Supported 00:22:29.792 Transport SGL Data Block: Not Supported 00:22:29.792 Replay Protected Memory Block: Not Supported 00:22:29.792 00:22:29.792 Firmware Slot Information 00:22:29.792 ========================= 00:22:29.792 Active slot: 1 00:22:29.792 Slot 1 Firmware Revision: 24.09 00:22:29.792 00:22:29.792 00:22:29.792 Commands Supported and Effects 00:22:29.792 ============================== 00:22:29.792 Admin Commands 00:22:29.792 -------------- 00:22:29.792 Get Log Page (02h): Supported 00:22:29.792 Identify (06h): Supported 00:22:29.792 
Abort (08h): Supported 00:22:29.792 Set Features (09h): Supported 00:22:29.792 Get Features (0Ah): Supported 00:22:29.792 Asynchronous Event Request (0Ch): Supported 00:22:29.792 Keep Alive (18h): Supported 00:22:29.792 I/O Commands 00:22:29.792 ------------ 00:22:29.792 Flush (00h): Supported LBA-Change 00:22:29.792 Write (01h): Supported LBA-Change 00:22:29.792 Read (02h): Supported 00:22:29.792 Compare (05h): Supported 00:22:29.792 Write Zeroes (08h): Supported LBA-Change 00:22:29.792 Dataset Management (09h): Supported LBA-Change 00:22:29.792 Copy (19h): Supported LBA-Change 00:22:29.792 00:22:29.792 Error Log 00:22:29.792 ========= 00:22:29.792 00:22:29.792 Arbitration 00:22:29.792 =========== 00:22:29.792 Arbitration Burst: 1 00:22:29.792 00:22:29.792 Power Management 00:22:29.792 ================ 00:22:29.792 Number of Power States: 1 00:22:29.792 Current Power State: Power State #0 00:22:29.792 Power State #0: 00:22:29.792 Max Power: 0.00 W 00:22:29.792 Non-Operational State: Operational 00:22:29.792 Entry Latency: Not Reported 00:22:29.792 Exit Latency: Not Reported 00:22:29.792 Relative Read Throughput: 0 00:22:29.792 Relative Read Latency: 0 00:22:29.792 Relative Write Throughput: 0 00:22:29.792 Relative Write Latency: 0 00:22:29.792 Idle Power: Not Reported 00:22:29.792 Active Power: Not Reported 00:22:29.792 Non-Operational Permissive Mode: Not Supported 00:22:29.792 00:22:29.792 Health Information 00:22:29.792 ================== 00:22:29.792 Critical Warnings: 00:22:29.792 Available Spare Space: OK 00:22:29.792 Temperature: OK 00:22:29.792 Device Reliability: OK 00:22:29.792 Read Only: No 00:22:29.792 Volatile Memory Backup: OK 00:22:29.792 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:29.792 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:29.792 Available Spare: 0% 00:22:29.792 Available Spare Threshold: 0% 00:22:29.792 Life Percentage Used:[2024-07-15 23:48:18.643831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:22:29.792 [2024-07-15 23:48:18.643836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1170ec0) 00:22:29.792 [2024-07-15 23:48:18.643843] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.792 [2024-07-15 23:48:18.643855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f48c0, cid 7, qid 0 00:22:29.792 [2024-07-15 23:48:18.643955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.792 [2024-07-15 23:48:18.643960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.792 [2024-07-15 23:48:18.643963] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.792 [2024-07-15 23:48:18.643966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f48c0) on tqpair=0x1170ec0 00:22:29.792 [2024-07-15 23:48:18.643993] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:29.792 [2024-07-15 23:48:18.644002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3e40) on tqpair=0x1170ec0 00:22:29.792 [2024-07-15 23:48:18.644007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.792 [2024-07-15 23:48:18.644012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f3fc0) on tqpair=0x1170ec0 00:22:29.792 [2024-07-15 23:48:18.644016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.792 [2024-07-15 23:48:18.644020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f4140) on tqpair=0x1170ec0 00:22:29.792 [2024-07-15 23:48:18.644024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:29.792 [2024-07-15 23:48:18.644028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f42c0) on tqpair=0x1170ec0 00:22:29.792 [2024-07-15 23:48:18.644032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:29.792 [2024-07-15 23:48:18.644039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.792 [2024-07-15 23:48:18.644042] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.792 [2024-07-15 23:48:18.644045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1170ec0) 00:22:29.792 [2024-07-15 23:48:18.644051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.792 [2024-07-15 23:48:18.644062] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f42c0, cid 3, qid 0 00:22:29.792 [2024-07-15 23:48:18.644144] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.792 [2024-07-15 23:48:18.644149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.792 [2024-07-15 23:48:18.644152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.792 [2024-07-15 23:48:18.644155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f42c0) on tqpair=0x1170ec0 00:22:29.792 [2024-07-15 23:48:18.644161] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.792 [2024-07-15 23:48:18.644164] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.792 [2024-07-15 23:48:18.644168] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1170ec0) 00:22:29.792 [2024-07-15 23:48:18.644173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.792 [2024-07-15 23:48:18.644187] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f42c0, cid 3, qid 0 00:22:29.792 [2024-07-15 23:48:18.644291] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.792 [2024-07-15 23:48:18.644297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.792 [2024-07-15 23:48:18.644300] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.792 [2024-07-15 23:48:18.644303] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f42c0) on tqpair=0x1170ec0 00:22:29.792 [2024-07-15 23:48:18.644309] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:29.792 [2024-07-15 23:48:18.644313] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:29.792 [2024-07-15 23:48:18.644321] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.792 [2024-07-15 23:48:18.644325] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.792 [2024-07-15 23:48:18.644328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1170ec0) 00:22:29.792 [2024-07-15 23:48:18.644334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.792 [2024-07-15 23:48:18.644343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f42c0, cid 3, qid 0 00:22:29.792 [2024-07-15 23:48:18.644432] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.792 [2024-07-15 23:48:18.644438] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.793 [2024-07-15 23:48:18.644440] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.793 [2024-07-15 23:48:18.644444] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f42c0) on tqpair=0x1170ec0 00:22:29.793 [2024-07-15 23:48:18.644452] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.793 [2024-07-15 23:48:18.644455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.793 [2024-07-15 23:48:18.644458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1170ec0) 00:22:29.793 [2024-07-15 23:48:18.644464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.793 [2024-07-15 23:48:18.644474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f42c0, cid 3, qid 0 00:22:29.793 [2024-07-15 23:48:18.648235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.793 [2024-07-15 23:48:18.648245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.793 [2024-07-15 23:48:18.648248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.793 [2024-07-15 23:48:18.648252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f42c0) on tqpair=0x1170ec0 00:22:29.793 [2024-07-15 23:48:18.648262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:29.793 [2024-07-15 23:48:18.648266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:29.793 [2024-07-15 23:48:18.648269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1170ec0) 00:22:29.793 [2024-07-15 23:48:18.648275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:29.793 [2024-07-15 23:48:18.648287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11f42c0, cid 3, qid 0 00:22:29.793 [2024-07-15 23:48:18.648375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:29.793 [2024-07-15 23:48:18.648381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:29.793 [2024-07-15 23:48:18.648384] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:29.793 [2024-07-15 23:48:18.648387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x11f42c0) on tqpair=0x1170ec0 00:22:29.793 [2024-07-15 23:48:18.648394] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:22:29.793 0% 00:22:29.793 Data Units Read: 0 00:22:29.793 Data Units Written: 0 00:22:29.793 Host Read Commands: 0 00:22:29.793 Host Write Commands: 0 00:22:29.793 Controller Busy Time: 0 minutes 00:22:29.793 Power Cycles: 0 00:22:29.793 Power On Hours: 0 hours 00:22:29.793 Unsafe Shutdowns: 0 00:22:29.793 Unrecoverable Media Errors: 0 00:22:29.793 Lifetime Error Log Entries: 0 00:22:29.793 Warning Temperature Time: 0 minutes 00:22:29.793 Critical Temperature Time: 0 minutes 00:22:29.793 00:22:29.793 Number of Queues 00:22:29.793 ================ 00:22:29.793 Number of I/O Submission Queues: 127 00:22:29.793 Number of I/O Completion Queues: 127 00:22:29.793 00:22:29.793 Active Namespaces 00:22:29.793 ================= 00:22:29.793 Namespace ID:1 00:22:29.793 Error Recovery Timeout: Unlimited 00:22:29.793 Command Set Identifier: NVM (00h) 00:22:29.793 Deallocate: Supported 00:22:29.793 Deallocated/Unwritten Error: Not Supported 00:22:29.793 Deallocated Read Value: Unknown 00:22:29.793 Deallocate in Write Zeroes: Not Supported 00:22:29.793 Deallocated Guard Field: 0xFFFF 00:22:29.793 Flush: Supported 00:22:29.793 Reservation: Supported 00:22:29.793 Namespace Sharing Capabilities: Multiple Controllers 00:22:29.793 Size (in LBAs): 131072 (0GiB) 00:22:29.793 Capacity (in LBAs): 131072 (0GiB) 00:22:29.793 Utilization (in LBAs): 131072 (0GiB) 00:22:29.793 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:29.793 EUI64: ABCDEF0123456789 00:22:29.793 UUID: bb520398-974f-4ad8-a181-8720c7af10f8 00:22:29.793 Thin Provisioning: Not Supported 00:22:29.793 Per-NS Atomic Units: Yes 00:22:29.793 Atomic Boundary Size 
(Normal): 0 00:22:29.793 Atomic Boundary Size (PFail): 0 00:22:29.793 Atomic Boundary Offset: 0 00:22:29.793 Maximum Single Source Range Length: 65535 00:22:29.793 Maximum Copy Length: 65535 00:22:29.793 Maximum Source Range Count: 1 00:22:29.793 NGUID/EUI64 Never Reused: No 00:22:29.793 Namespace Write Protected: No 00:22:29.793 Number of LBA Formats: 1 00:22:29.793 Current LBA Format: LBA Format #00 00:22:29.793 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:29.793 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.793 rmmod nvme_tcp 00:22:29.793 rmmod nvme_fabrics 00:22:29.793 rmmod nvme_keyring 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@125 -- # return 0 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1086693 ']' 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1086693 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@942 -- # '[' -z 1086693 ']' 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # kill -0 1086693 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # uname 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:29.793 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1086693 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1086693' 00:22:30.052 killing process with pid 1086693 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@961 -- # kill 1086693 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # wait 1086693 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:30.052 23:48:18 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.588 23:48:21 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:32.588 00:22:32.588 real 0m9.196s 00:22:32.588 user 0m7.523s 00:22:32.588 sys 0m4.327s 00:22:32.588 23:48:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:32.588 23:48:21 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:32.588 ************************************ 00:22:32.588 END TEST nvmf_identify 00:22:32.588 ************************************ 00:22:32.588 23:48:21 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:22:32.588 23:48:21 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:32.588 23:48:21 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:32.588 23:48:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:32.588 23:48:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:32.588 ************************************ 00:22:32.588 START TEST nvmf_perf 00:22:32.588 ************************************ 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:32.588 * Looking for test storage... 
00:22:32.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:32.588 23:48:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:37.858 23:48:26 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.858 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:37.859 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:37.859 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma 
]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:37.859 Found net devices under 0000:86:00.0: cvl_0_0 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:22:37.859 Found net devices under 0000:86:00.1: cvl_0_1 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.859 
23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:37.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:22:37.859 00:22:37.859 --- 10.0.0.2 ping statistics --- 00:22:37.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.859 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:37.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:22:37.859 00:22:37.859 --- 10.0.0.1 ping statistics --- 00:22:37.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.859 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1090368 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1090368 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@823 -- # '[' -z 1090368 ']' 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:37.859 23:48:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:37.859 [2024-07-15 23:48:26.750829] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:22:37.859 [2024-07-15 23:48:26.750871] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.859 [2024-07-15 23:48:26.807100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.118 [2024-07-15 23:48:26.888109] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.118 [2024-07-15 23:48:26.888143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.118 [2024-07-15 23:48:26.888151] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.118 [2024-07-15 23:48:26.888157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.118 [2024-07-15 23:48:26.888162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.118 [2024-07-15 23:48:26.888203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.118 [2024-07-15 23:48:26.888288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.118 [2024-07-15 23:48:26.888338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.118 [2024-07-15 23:48:26.888339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.687 23:48:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:38.687 23:48:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # return 0 00:22:38.687 23:48:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.687 23:48:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:38.687 23:48:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:38.687 23:48:27 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.687 23:48:27 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:38.687 23:48:27 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:42.006 23:48:30 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:42.006 23:48:30 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:42.006 23:48:30 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:42.006 23:48:30 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:42.264 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:42.264 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 
0000:5e:00.0 ']' 00:22:42.264 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:42.264 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:42.264 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:42.264 [2024-07-15 23:48:31.159702] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.264 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:42.522 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:42.522 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:42.779 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:42.780 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:43.037 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.037 [2024-07-15 23:48:31.908692] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.037 23:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:43.294 23:48:32 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:43.294 23:48:32 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 
00:22:43.294 23:48:32 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:43.294 23:48:32 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:44.667 Initializing NVMe Controllers 00:22:44.667 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:44.667 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:44.667 Initialization complete. Launching workers. 00:22:44.667 ======================================================== 00:22:44.667 Latency(us) 00:22:44.667 Device Information : IOPS MiB/s Average min max 00:22:44.667 PCIE (0000:5e:00.0) NSID 1 from core 0: 97794.33 382.01 326.78 38.48 7195.83 00:22:44.667 ======================================================== 00:22:44.667 Total : 97794.33 382.01 326.78 38.48 7195.83 00:22:44.667 00:22:44.667 23:48:33 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:46.041 Initializing NVMe Controllers 00:22:46.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:46.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:46.042 Initialization complete. Launching workers. 
00:22:46.042 ======================================================== 00:22:46.042 Latency(us) 00:22:46.042 Device Information : IOPS MiB/s Average min max 00:22:46.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 98.00 0.38 10283.55 166.78 45277.17 00:22:46.042 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 63.00 0.25 16213.45 7965.51 47885.42 00:22:46.042 ======================================================== 00:22:46.042 Total : 161.00 0.63 12603.95 166.78 47885.42 00:22:46.042 00:22:46.042 23:48:34 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:46.975 Initializing NVMe Controllers 00:22:46.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:46.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:46.975 Initialization complete. Launching workers. 
00:22:46.975 ======================================================== 00:22:46.975 Latency(us) 00:22:46.975 Device Information : IOPS MiB/s Average min max 00:22:46.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10611.71 41.45 3016.18 366.78 10012.57 00:22:46.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3713.40 14.51 8657.19 7066.64 27444.72 00:22:46.976 ======================================================== 00:22:46.976 Total : 14325.10 55.96 4478.46 366.78 27444.72 00:22:46.976 00:22:46.976 23:48:35 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:46.976 23:48:35 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:46.976 23:48:35 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:49.510 Initializing NVMe Controllers 00:22:49.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.510 Controller IO queue size 128, less than required. 00:22:49.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.510 Controller IO queue size 128, less than required. 00:22:49.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:49.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:49.510 Initialization complete. Launching workers. 
00:22:49.510 ======================================================== 00:22:49.510 Latency(us) 00:22:49.510 Device Information : IOPS MiB/s Average min max 00:22:49.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1311.03 327.76 100172.29 52435.76 158583.16 00:22:49.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 591.34 147.83 223864.98 61648.46 363431.80 00:22:49.510 ======================================================== 00:22:49.510 Total : 1902.36 475.59 138621.26 52435.76 363431.80 00:22:49.510 00:22:49.511 23:48:38 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:49.511 No valid NVMe controllers or AIO or URING devices found 00:22:49.511 Initializing NVMe Controllers 00:22:49.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:49.511 Controller IO queue size 128, less than required. 00:22:49.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.511 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:49.511 Controller IO queue size 128, less than required. 00:22:49.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:49.511 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:49.511 WARNING: Some requested NVMe devices were skipped 00:22:49.511 23:48:38 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:52.805 Initializing NVMe Controllers 00:22:52.805 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.805 Controller IO queue size 128, less than required. 00:22:52.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:52.805 Controller IO queue size 128, less than required. 00:22:52.805 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:52.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:52.805 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:52.805 Initialization complete. Launching workers. 
00:22:52.805 00:22:52.805 ==================== 00:22:52.805 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:52.805 TCP transport: 00:22:52.805 polls: 45331 00:22:52.805 idle_polls: 17186 00:22:52.805 sock_completions: 28145 00:22:52.805 nvme_completions: 4363 00:22:52.805 submitted_requests: 6514 00:22:52.805 queued_requests: 1 00:22:52.805 00:22:52.805 ==================== 00:22:52.805 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:52.805 TCP transport: 00:22:52.805 polls: 45313 00:22:52.805 idle_polls: 17249 00:22:52.805 sock_completions: 28064 00:22:52.805 nvme_completions: 4355 00:22:52.805 submitted_requests: 6456 00:22:52.805 queued_requests: 1 00:22:52.805 ======================================================== 00:22:52.805 Latency(us) 00:22:52.805 Device Information : IOPS MiB/s Average min max 00:22:52.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1090.49 272.62 121748.67 65149.50 191665.01 00:22:52.805 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1088.49 272.12 120666.98 48963.57 166302.13 00:22:52.805 ======================================================== 00:22:52.805 Total : 2178.98 544.75 121208.32 48963.57 191665.01 00:22:52.805 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:52.805 23:48:41 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.805 rmmod nvme_tcp 00:22:52.805 rmmod nvme_fabrics 00:22:52.805 rmmod nvme_keyring 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1090368 ']' 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1090368 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@942 -- # '[' -z 1090368 ']' 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # kill -0 1090368 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # uname 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1090368 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1090368' 00:22:52.805 killing process with pid 1090368 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@961 -- # kill 1090368 00:22:52.805 23:48:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # wait 1090368 00:22:54.184 23:48:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.184 23:48:42 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:54.184 23:48:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:54.184 23:48:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.184 23:48:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:54.184 23:48:42 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.184 23:48:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.184 23:48:42 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.091 23:48:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:56.091 00:22:56.091 real 0m23.929s 00:22:56.091 user 1m4.596s 00:22:56.091 sys 0m7.020s 00:22:56.091 23:48:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:56.091 23:48:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:56.091 ************************************ 00:22:56.091 END TEST nvmf_perf 00:22:56.091 ************************************ 00:22:56.350 23:48:45 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:22:56.351 23:48:45 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:56.351 23:48:45 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:56.351 23:48:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:56.351 23:48:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:56.351 ************************************ 00:22:56.351 START TEST nvmf_fio_host 00:22:56.351 ************************************ 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:56.351 * Looking for test 
storage... 00:22:56.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:56.351 
23:48:45 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 
00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.351 23:48:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.626 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.626 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.626 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.626 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.626 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.626 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.626 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:01.627 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:01.627 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:01.627 Found net devices under 0000:86:00.0: cvl_0_0 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:01.627 Found net devices under 0000:86:00.1: cvl_0_1 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:01.627 
23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:23:01.627 00:23:01.627 --- 10.0.0.2 ping statistics --- 00:23:01.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.627 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:23:01.627 00:23:01.627 --- 10.0.0.1 ping statistics --- 00:23:01.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.627 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.627 23:48:49 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1096301 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1096301 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@823 -- # '[' -z 1096301 ']' 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:01.627 23:48:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.627 [2024-07-15 23:48:49.982691] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:23:01.627 [2024-07-15 23:48:49.982733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.627 [2024-07-15 23:48:50.040181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.627 [2024-07-15 23:48:50.129697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.627 [2024-07-15 23:48:50.129732] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.627 [2024-07-15 23:48:50.129742] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.627 [2024-07-15 23:48:50.129748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.627 [2024-07-15 23:48:50.129753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
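The `waitforlisten 1096301` call above blocks until the freshly started `nvmf_tgt` is up and its RPC socket (`/var/tmp/spdk.sock`) is accepting commands. Below is a hedged sketch of that poll-with-retries shape, not the actual helper from `autotest_common.sh` (the real one checks a UNIX-domain RPC socket and honors `max_retries=100`; here plain path existence stands in, and the demo uses a background `sleep` in place of the target so the timeout branch is exercised deterministically):

```shell
# Sketch of the waitforlisten pattern: poll until the process is alive AND
# its RPC endpoint appears, or give up after max_retries attempts.
waitforlisten() {
  local pid=$1 rpc_addr=$2 max_retries=${3:-100} i
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "process died"; return 1; }
    [ -e "$rpc_addr" ] && { echo "listening"; return 0; }
    sleep 0.1
  done
  echo "timeout"
  return 1
}

# Demo: a background sleep stands in for nvmf_tgt; the socket path never
# appears, so the helper prints "timeout" after 3 retries.
sleep 5 & demo_pid=$!
waitforlisten "$demo_pid" /tmp/no-such-rpc.sock 3 || true
kill "$demo_pid" 2>/dev/null
```

Only once this gate passes does the test issue RPCs such as `nvmf_create_transport -t tcp -o -u 8192`, as seen in the next entries of the log.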
00:23:01.628 [2024-07-15 23:48:50.129802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.628 [2024-07-15 23:48:50.129901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.628 [2024-07-15 23:48:50.129920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.628 [2024-07-15 23:48:50.129921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.886 23:48:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:01.886 23:48:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # return 0 00:23:01.886 23:48:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:02.154 [2024-07-15 23:48:50.951660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.154 23:48:50 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:02.154 23:48:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:02.154 23:48:50 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.154 23:48:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:02.475 Malloc1 00:23:02.475 23:48:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:02.475 23:48:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:02.734 23:48:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.994 
[2024-07-15 23:48:51.753876] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local sanitizers 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # shift 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local asan_lib= 00:23:02.994 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:23:03.274 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:03.274 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libasan 00:23:03.274 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:23:03.274 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:23:03.274 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:23:03.274 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:23:03.274 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:03.274 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:23:03.274 23:48:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:23:03.274 23:48:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:23:03.274 23:48:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:23:03.274 23:48:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:03.274 23:48:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:03.531 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:03.531 fio-3.35 00:23:03.531 Starting 1 thread 00:23:06.058 00:23:06.058 test: (groupid=0, jobs=1): err= 0: pid=1096845: Mon Jul 15 23:48:54 2024 00:23:06.058 read: IOPS=11.7k, BW=45.8MiB/s (48.0MB/s)(91.8MiB/2005msec) 00:23:06.058 slat (nsec): min=1599, max=246130, avg=1762.66, stdev=2338.13 00:23:06.058 clat 
(usec): min=3753, max=10677, avg=6062.85, stdev=468.89 00:23:06.058 lat (usec): min=3788, max=10679, avg=6064.61, stdev=468.90 00:23:06.058 clat percentiles (usec): 00:23:06.058 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5473], 20.00th=[ 5735], 00:23:06.058 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:23:06.058 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6587], 95.00th=[ 6783], 00:23:06.058 | 99.00th=[ 7111], 99.50th=[ 7439], 99.90th=[ 8717], 99.95th=[10028], 00:23:06.058 | 99.99th=[10683] 00:23:06.058 bw ( KiB/s): min=46016, max=47448, per=99.99%, avg=46868.00, stdev=624.50, samples=4 00:23:06.058 iops : min=11504, max=11862, avg=11717.00, stdev=156.12, samples=4 00:23:06.058 write: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(91.2MiB/2005msec); 0 zone resets 00:23:06.058 slat (nsec): min=1645, max=239828, avg=1856.46, stdev=1742.74 00:23:06.058 clat (usec): min=2495, max=9696, avg=4866.74, stdev=395.73 00:23:06.058 lat (usec): min=2511, max=9698, avg=4868.60, stdev=395.77 00:23:06.058 clat percentiles (usec): 00:23:06.058 | 1.00th=[ 3949], 5.00th=[ 4293], 10.00th=[ 4424], 20.00th=[ 4555], 00:23:06.058 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 4948], 00:23:06.058 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:23:06.058 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 8094], 99.95th=[ 8717], 00:23:06.058 | 99.99th=[ 9634] 00:23:06.058 bw ( KiB/s): min=46080, max=47040, per=99.96%, avg=46548.00, stdev=392.22, samples=4 00:23:06.058 iops : min=11520, max=11760, avg=11637.00, stdev=98.05, samples=4 00:23:06.058 lat (msec) : 4=0.64%, 10=99.34%, 20=0.02% 00:23:06.058 cpu : usr=67.17%, sys=28.64%, ctx=78, majf=0, minf=6 00:23:06.058 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:06.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:06.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:06.058 issued rwts: 
total=23494,23341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:06.058 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:06.058 00:23:06.058 Run status group 0 (all jobs): 00:23:06.058 READ: bw=45.8MiB/s (48.0MB/s), 45.8MiB/s-45.8MiB/s (48.0MB/s-48.0MB/s), io=91.8MiB (96.2MB), run=2005-2005msec 00:23:06.058 WRITE: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=91.2MiB (95.6MB), run=2005-2005msec 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local sanitizers 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # shift 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local asan_lib= 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1339 -- # grep libasan 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:06.058 23:48:54 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:06.058 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:06.058 fio-3.35 00:23:06.058 Starting 1 thread 00:23:08.584 00:23:08.584 test: (groupid=0, jobs=1): err= 0: pid=1097415: Mon Jul 15 23:48:57 2024 00:23:08.584 read: IOPS=10.2k, BW=159MiB/s (167MB/s)(319MiB/2006msec) 00:23:08.584 slat (nsec): min=2590, max=87580, avg=2867.99, stdev=1259.10 00:23:08.584 clat (usec): min=2908, max=51478, avg=7558.66, stdev=3610.21 00:23:08.584 lat (usec): min=2910, max=51480, avg=7561.53, 
stdev=3610.26 00:23:08.584 clat percentiles (usec): 00:23:08.584 | 1.00th=[ 3851], 5.00th=[ 4490], 10.00th=[ 4948], 20.00th=[ 5604], 00:23:08.584 | 30.00th=[ 6194], 40.00th=[ 6849], 50.00th=[ 7373], 60.00th=[ 7898], 00:23:08.584 | 70.00th=[ 8356], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[10290], 00:23:08.584 | 99.00th=[12780], 99.50th=[44303], 99.90th=[50594], 99.95th=[51119], 00:23:08.584 | 99.99th=[51643] 00:23:08.584 bw ( KiB/s): min=75552, max=94080, per=50.31%, avg=81920.00, stdev=8533.51, samples=4 00:23:08.584 iops : min= 4722, max= 5880, avg=5120.00, stdev=533.34, samples=4 00:23:08.584 write: IOPS=6101, BW=95.3MiB/s (100.0MB/s)(168MiB/1757msec); 0 zone resets 00:23:08.584 slat (usec): min=30, max=240, avg=32.32, stdev= 5.98 00:23:08.584 clat (usec): min=4571, max=14418, avg=8713.73, stdev=1505.77 00:23:08.584 lat (usec): min=4604, max=14449, avg=8746.05, stdev=1506.56 00:23:08.584 clat percentiles (usec): 00:23:08.584 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7439], 00:23:08.584 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8848], 00:23:08.584 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11469], 00:23:08.584 | 99.00th=[12518], 99.50th=[12780], 99.90th=[14222], 99.95th=[14353], 00:23:08.584 | 99.99th=[14353] 00:23:08.584 bw ( KiB/s): min=78528, max=97536, per=87.33%, avg=85256.00, stdev=8789.24, samples=4 00:23:08.584 iops : min= 4908, max= 6096, avg=5328.50, stdev=549.33, samples=4 00:23:08.584 lat (msec) : 4=1.04%, 10=88.24%, 20=10.30%, 50=0.32%, 100=0.08% 00:23:08.584 cpu : usr=86.03%, sys=12.17%, ctx=65, majf=0, minf=3 00:23:08.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:08.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:08.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:08.584 issued rwts: total=20414,10721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:08.584 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:23:08.584 00:23:08.584 Run status group 0 (all jobs): 00:23:08.584 READ: bw=159MiB/s (167MB/s), 159MiB/s-159MiB/s (167MB/s-167MB/s), io=319MiB (334MB), run=2006-2006msec 00:23:08.584 WRITE: bw=95.3MiB/s (100.0MB/s), 95.3MiB/s-95.3MiB/s (100.0MB/s-100.0MB/s), io=168MiB (176MB), run=1757-1757msec 00:23:08.584 23:48:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.584 23:48:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:08.584 23:48:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:08.584 23:48:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:08.584 23:48:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:08.584 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.584 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.843 rmmod nvme_tcp 00:23:08.843 rmmod nvme_fabrics 00:23:08.843 rmmod nvme_keyring 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1096301 ']' 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1096301 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@942 -- # '[' -z 1096301 ']' 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # kill -0 1096301 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # uname 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1096301 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1096301' 00:23:08.843 killing process with pid 1096301 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@961 -- # kill 1096301 00:23:08.843 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # wait 1096301 00:23:09.102 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.102 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.102 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.102 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.102 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.102 23:48:57 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.102 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.102 23:48:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.007 23:48:59 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:11.007 00:23:11.007 real 0m14.831s 00:23:11.007 user 
0m47.182s 00:23:11.007 sys 0m5.642s 00:23:11.007 23:48:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1118 -- # xtrace_disable 00:23:11.007 23:48:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.007 ************************************ 00:23:11.007 END TEST nvmf_fio_host 00:23:11.007 ************************************ 00:23:11.007 23:48:59 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:23:11.007 23:48:59 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:11.007 23:48:59 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:23:11.007 23:48:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:11.007 23:48:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:11.266 ************************************ 00:23:11.266 START TEST nvmf_failover 00:23:11.266 ************************************ 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:11.266 * Looking for test storage... 
00:23:11.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.266 23:49:00 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.266 23:49:00 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:11.266 23:49:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:16.530 23:49:04 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:16.530 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:16.530 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:16.530 Found net devices under 0000:86:00.0: cvl_0_0 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:16.530 Found net devices under 0000:86:00.1: cvl_0_1 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:16.530 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:16.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:16.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:23:16.531 00:23:16.531 --- 10.0.0.2 ping statistics --- 00:23:16.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.531 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:16.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:16.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:23:16.531 00:23:16.531 --- 10.0.0.1 ping statistics --- 00:23:16.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:16.531 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1101151 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1101151 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1101151 ']' 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # 
local max_retries=100 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:16.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:16.531 23:49:04 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:16.531 [2024-07-15 23:49:04.988960] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:23:16.531 [2024-07-15 23:49:04.989007] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.531 [2024-07-15 23:49:05.045107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:16.531 [2024-07-15 23:49:05.124062] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.531 [2024-07-15 23:49:05.124096] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.531 [2024-07-15 23:49:05.124103] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.531 [2024-07-15 23:49:05.124110] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.531 [2024-07-15 23:49:05.124115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:16.531 [2024-07-15 23:49:05.124150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.531 [2024-07-15 23:49:05.124244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:16.531 [2024-07-15 23:49:05.124246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.097 23:49:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:17.097 23:49:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:23:17.097 23:49:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.097 23:49:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.097 23:49:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.097 23:49:05 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.097 23:49:05 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:17.097 [2024-07-15 23:49:05.996472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.097 23:49:06 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:17.355 Malloc0 00:23:17.355 23:49:06 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:17.612 23:49:06 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:17.870 23:49:06 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:17.870 [2024-07-15 23:49:06.732582] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.870 23:49:06 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:18.128 [2024-07-15 23:49:06.905089] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:18.128 23:49:06 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:18.128 [2024-07-15 23:49:07.097736] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:18.386 23:49:07 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1101518 00:23:18.386 23:49:07 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:18.386 23:49:07 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.386 23:49:07 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1101518 /var/tmp/bdevperf.sock 00:23:18.386 23:49:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1101518 ']' 00:23:18.386 23:49:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.386 23:49:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:18.386 23:49:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.386 23:49:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:18.386 23:49:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:19.319 23:49:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:19.319 23:49:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:23:19.319 23:49:07 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:19.319 NVMe0n1 00:23:19.319 23:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:19.576 00:23:19.576 23:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1101739 00:23:19.576 23:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:19.576 23:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:20.952 23:49:09 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:20.952 [2024-07-15 23:49:09.677097] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b91080 is same with the state(5) to be set 00:23:20.952 [2024-07-15 23:49:09.677149] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b91080 is 
same with the state(5) to be set 00:23:20.952 [2024-07-15 23:49:09.677825] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b91080 is same with the state(5) to be set 00:23:20.954 23:49:09 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:24.277 23:49:12 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.277 00:23:24.277 23:49:13 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:24.535 [2024-07-15 23:49:13.271633] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271670] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271677] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271683] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271690] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271696] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271702] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271707] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the 
state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271713] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271719] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271725] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271731] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271736] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271742] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271748] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271753] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271759] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271765] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271777] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271783] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 
[2024-07-15 23:49:13.271789] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271795] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271801] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271807] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271813] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271819] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271825] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271830] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271836] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271842] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271848] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271855] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271861] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271867] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271873] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271880] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271885] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271891] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271896] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271905] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271911] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271917] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271924] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271930] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271935] 
tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271946] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271952] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 [2024-07-15 23:49:13.271958] tcp.c:1621:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b92460 is same with the state(5) to be set 00:23:24.535 23:49:13 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:27.810 23:49:16 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:27.810 [2024-07-15 23:49:16.470110] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.810 23:49:16 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:28.744 23:49:17 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:28.744 23:49:17 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1101739 00:23:35.311 0 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1101518 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1101518 ']' 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1101518 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1101518 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1101518' 00:23:35.311 killing process with pid 1101518 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1101518 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1101518 00:23:35.311 23:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:35.311 [2024-07-15 23:49:07.165096] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:23:35.311 [2024-07-15 23:49:07.165150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1101518 ] 00:23:35.311 [2024-07-15 23:49:07.220962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.311 [2024-07-15 23:49:07.296887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.311 Running I/O for 15 seconds... 
00:23:35.311 [2024-07-15 23:49:09.679312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.311 [2024-07-15 23:49:09.679351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.311 
(matching nvme_io_qpair_print_command / spdk_nvme_print_completion pairs repeat for READ sqid:1 lba:96504 through lba:96880 and WRITE sqid:1 lba:96888 through lba:96984, each len:8 in steps of 8, every command completing with ABORTED - SQ DELETION (00/08); duplicates elided) 
[2024-07-15 23:49:09.680313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 
lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 
[2024-07-15 23:49:09.680399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.313 [2024-07-15 23:49:09.680515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.313 [2024-07-15 23:49:09.680521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97128 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 
23:49:09.680649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.680989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.680995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.681003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.681010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.681020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.681026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.681034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.314 [2024-07-15 23:49:09.681040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.681062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.314 [2024-07-15 23:49:09.681071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97392 len:8 PRP1 0x0 PRP2 0x0 00:23:35.314 [2024-07-15 23:49:09.681079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.314 [2024-07-15 23:49:09.681088] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97400 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97408 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97416 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681175] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97424 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97432 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97440 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97448 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97456 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97464 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97472 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681350] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97480 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97488 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.681392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.681396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.681402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97496 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.681408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.693577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.693589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.693598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97504 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 
[2024-07-15 23:49:09.693609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.693619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.315 [2024-07-15 23:49:09.693625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.315 [2024-07-15 23:49:09.693634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97512 len:8 PRP1 0x0 PRP2 0x0 00:23:35.315 [2024-07-15 23:49:09.693645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.693691] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6a5300 was disconnected and freed. reset controller. 00:23:35.315 [2024-07-15 23:49:09.693704] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:35.315 [2024-07-15 23:49:09.693729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.315 [2024-07-15 23:49:09.693740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.693750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.315 [2024-07-15 23:49:09.693759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.693770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.315 [2024-07-15 23:49:09.693779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.693789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.315 [2024-07-15 23:49:09.693798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:09.693814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:35.315 [2024-07-15 23:49:09.693846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x687540 (9): Bad file descriptor 00:23:35.315 [2024-07-15 23:49:09.697726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:35.315 [2024-07-15 23:49:09.772468] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:35.315 [2024-07-15 23:49:13.272809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.315 [2024-07-15 23:49:13.272843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:13.272853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.315 [2024-07-15 23:49:13.272860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:13.272868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.315 [2024-07-15 23:49:13.272875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:13.272881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:35.315 [2024-07-15 23:49:13.272888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:13.272895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x687540 is same with the state(5) to be set 00:23:35.315 [2024-07-15 23:49:13.272934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.315 [2024-07-15 23:49:13.272942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.315 [2024-07-15 23:49:13.272959] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.315 [2024-07-15 23:49:13.272966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.272975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.272982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.272991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.272998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 
23:49:13.273130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:41376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.316 [2024-07-15 23:49:13.273315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.316 [2024-07-15 23:49:13.273329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.316 [2024-07-15 23:49:13.273346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.316 [2024-07-15 23:49:13.273361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.316 [2024-07-15 23:49:13.273377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.316 [2024-07-15 23:49:13.273392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.316 [2024-07-15 23:49:13.273407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.316 [2024-07-15 23:49:13.273421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.316 [2024-07-15 23:49:13.273438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.316 [2024-07-15 23:49:13.273446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 
lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 
23:49:13.273567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273648] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.317 [2024-07-15 23:49:13.273963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.317 [2024-07-15 23:49:13.273970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.273976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.273984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 
[2024-07-15 23:49:13.273991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.273999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:41880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274070] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:41944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.318 [2024-07-15 23:49:13.274233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.318 [2024-07-15 23:49:13.274239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:35.318 [2024-07-15 23:49:13.274248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:35.318 [2024-07-15 23:49:13.274254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command/ABORTED - SQ DELETION completion pairs repeated for lba 41992-42248 (step 8), then READ pairs for lba 41432-41472 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) ...]
00:23:35.319 [2024-07-15 23:49:13.274847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:35.319 [2024-07-15 23:49:13.274858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:35.319 [2024-07-15 23:49:13.274864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41480 len:8 PRP1 0x0 PRP2 0x0
00:23:35.319 [2024-07-15 23:49:13.274871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:35.319 [2024-07-15 23:49:13.274914] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x852380 was disconnected and freed. reset controller.
00:23:35.319 [2024-07-15 23:49:13.274923] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:35.319 [2024-07-15 23:49:13.274932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.319 [2024-07-15 23:49:13.277759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.319 [2024-07-15 23:49:13.277790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x687540 (9): Bad file descriptor
00:23:35.319 [2024-07-15 23:49:13.427332] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:35.319 [2024-07-15 23:49:17.661567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:35.319 [2024-07-15 23:49:17.661614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command/ABORTED - SQ DELETION completion pairs repeated for lba 79280-79528 (step 8), then READ pairs for lba 78568-78840 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) ...]
00:23:35.321 [2024-07-15 23:49:17.662665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:35.321 [2024-07-15 23:49:17.662836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.321 [2024-07-15 23:49:17.662897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.321 [2024-07-15 23:49:17.662905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.662914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.662922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.662929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.662937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.662943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.662951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.662957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.662966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.662973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.662981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.662988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.662996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 
[2024-07-15 23:49:17.663092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.322 
[2024-07-15 23:49:17.663356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.322 [2024-07-15 23:49:17.663370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.322 [2024-07-15 23:49:17.663384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.322 [2024-07-15 23:49:17.663399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.322 [2024-07-15 23:49:17.663414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.322 [2024-07-15 23:49:17.663429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:35.322 [2024-07-15 23:49:17.663443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.322 [2024-07-15 23:49:17.663505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.322 [2024-07-15 23:49:17.663513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.323 [2024-07-15 23:49:17.663520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:35.323 [2024-07-15 23:49:17.663528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.323 [2024-07-15 23:49:17.663534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.323 [2024-07-15 23:49:17.663542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:35.323 [2024-07-15 23:49:17.663548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.323 [2024-07-15 23:49:17.663568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:35.323 [2024-07-15 23:49:17.663575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:35.323 [2024-07-15 23:49:17.663580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79264 len:8 PRP1 0x0 PRP2 0x0 00:23:35.323 [2024-07-15 23:49:17.663587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:35.323 [2024-07-15 23:49:17.663631] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x852170 was disconnected and freed. reset controller. 
00:23:35.323 [2024-07-15 23:49:17.663640] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:35.323 [2024-07-15 23:49:17.663659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:35.323 [2024-07-15 23:49:17.663666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:35.323 [2024-07-15 23:49:17.663674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:35.323 [2024-07-15 23:49:17.663680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:35.323 [2024-07-15 23:49:17.663687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:35.323 [2024-07-15 23:49:17.663693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:35.323 [2024-07-15 23:49:17.663700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:35.323 [2024-07-15 23:49:17.663706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:35.323 [2024-07-15 23:49:17.663713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:35.323 [2024-07-15 23:49:17.666558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:35.323 [2024-07-15 23:49:17.666587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x687540 (9): Bad file descriptor
00:23:35.323 [2024-07-15 23:49:17.698924] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:35.323
00:23:35.323 Latency(us)
00:23:35.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:35.323 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:35.323 Verification LBA range: start 0x0 length 0x4000
00:23:35.323 NVMe0n1 : 15.01 10942.90 42.75 772.54 0.00 10903.52 637.55 21883.33
00:23:35.323 ===================================================================================================================
00:23:35.323 Total : 10942.90 42.75 772.54 0.00 10903.52 637.55 21883.33
00:23:35.323 Received shutdown signal, test time was about 15.000000 seconds
00:23:35.323
00:23:35.323 Latency(us)
00:23:35.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:35.323 ===================================================================================================================
00:23:35.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1104174
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1104174 /var/tmp/bdevperf.sock
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1104174 ']'
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:35.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable
00:23:35.323 23:49:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:35.889 23:49:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:23:35.889 23:49:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # return 0
00:23:35.889 23:49:24 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:36.147 [2024-07-15 23:49:24.919282] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:36.147 23:49:24 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:36.147 [2024-07-15 23:49:25.095757] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:36.405 23:49:25 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:36.663 NVMe0n1
00:23:36.663 23:49:25 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:36.920
00:23:36.920 23:49:25 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:37.178
00:23:37.178 23:49:26 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:23:37.178 23:49:26 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:37.436 23:49:26 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:37.693 23:49:26 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:23:40.977 23:49:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:40.977 23:49:29 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:23:40.977 23:49:29 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:40.977 23:49:29 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1105100
00:23:40.977 23:49:29 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1105100
00:23:41.913 0
00:23:41.913 23:49:30 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:41.913 [2024-07-15 23:49:23.951250] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:23:41.913 [2024-07-15 23:49:23.951302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1104174 ]
00:23:41.913 [2024-07-15 23:49:24.006611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:41.913 [2024-07-15 23:49:24.075985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:41.913 [2024-07-15 23:49:26.446782] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:41.913 [2024-07-15 23:49:26.446830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:41.913 [2024-07-15 23:49:26.446840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:41.913 [2024-07-15 23:49:26.446849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:41.913 [2024-07-15 23:49:26.446856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:41.913 [2024-07-15 23:49:26.446863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:41.913 [2024-07-15 23:49:26.446870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:41.913 [2024-07-15 23:49:26.446877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:41.913 [2024-07-15 23:49:26.446883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:41.913 [2024-07-15 23:49:26.446890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:41.913 [2024-07-15 23:49:26.446913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:41.913 [2024-07-15 23:49:26.446926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a9e540 (9): Bad file descriptor
00:23:41.913 [2024-07-15 23:49:26.458975] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:41.913 Running I/O for 1 seconds...
00:23:41.913
00:23:41.913 Latency(us)
00:23:41.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:41.913 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:41.913 Verification LBA range: start 0x0 length 0x4000
00:23:41.913 NVMe0n1 : 1.00 10784.80 42.13 0.00 0.00 11824.61 975.92 11568.53
00:23:41.913 ===================================================================================================================
00:23:41.913 Total : 10784.80 42.13 0.00 0.00 11824.61 975.92 11568.53
00:23:41.913 23:49:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:41.913 23:49:30 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:23:42.171 23:49:30 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:42.171 23:49:31 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.171 23:49:31 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:42.429 23:49:31 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:42.686 23:49:31 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1104174 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1104174 ']' 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1104174 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1104174 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1104174' 00:23:45.979 killing process with pid 1104174 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1104174 00:23:45.979 
23:49:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1104174 00:23:45.979 23:49:34 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:46.275 23:49:34 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:46.275 rmmod nvme_tcp 00:23:46.275 rmmod nvme_fabrics 00:23:46.275 rmmod nvme_keyring 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1101151 ']' 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1101151 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1101151 ']' 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1101151 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@947 -- # uname 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1101151 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1101151' 00:23:46.275 killing process with pid 1101151 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1101151 00:23:46.275 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1101151 00:23:46.534 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:46.534 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:46.534 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:46.534 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.534 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:46.534 23:49:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.534 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.534 23:49:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.087 23:49:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.087 00:23:49.087 real 0m37.502s 00:23:49.087 user 2m2.586s 00:23:49.087 sys 0m7.030s 00:23:49.087 23:49:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1118 -- # xtrace_disable 00:23:49.087 23:49:37 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@10 -- # set +x 00:23:49.087 ************************************ 00:23:49.087 END TEST nvmf_failover 00:23:49.087 ************************************ 00:23:49.087 23:49:37 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:23:49.087 23:49:37 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:49.087 23:49:37 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:23:49.087 23:49:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:49.087 23:49:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:49.087 ************************************ 00:23:49.087 START TEST nvmf_host_discovery 00:23:49.087 ************************************ 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:49.087 * Looking for test storage... 
00:23:49.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.087 23:49:37 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@285 -- # xtrace_disable 00:23:49.087 23:49:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:54.351 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.351 23:49:42 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:54.351 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.351 
23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:54.351 Found net devices under 0000:86:00.0: cvl_0_0 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:54.351 Found net devices under 0000:86:00.1: cvl_0_1 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.351 23:49:42 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:54.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:23:54.351 00:23:54.351 --- 10.0.0.2 ping statistics --- 00:23:54.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.351 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:54.351 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:54.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:23:54.352 00:23:54.352 --- 10.0.0.1 ping statistics --- 00:23:54.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.352 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # 
nvmfappstart -m 0x2 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1109395 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1109395 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@823 -- # '[' -z 1109395 ']' 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.352 23:49:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:54.352 [2024-07-15 23:49:42.633959] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:23:54.352 [2024-07-15 23:49:42.634004] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:54.352 [2024-07-15 23:49:42.693240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:54.352 [2024-07-15 23:49:42.772047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:54.352 [2024-07-15 23:49:42.772081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:54.352 [2024-07-15 23:49:42.772088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:54.352 [2024-07-15 23:49:42.772094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:54.352 [2024-07-15 23:49:42.772099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:54.352 [2024-07-15 23:49:42.772122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # return 0
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:54.610 [2024-07-15 23:49:43.458517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:54.610 [2024-07-15 23:49:43.466636] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 --
# rpc_cmd bdev_null_create null0 1000 512
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:54.610 null0
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:54.610 null1
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:54.610 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:54.611 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:54.611 23:49:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1109558
00:23:54.611 23:49:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1109558 /tmp/host.sock
00:23:54.611 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@823 -- # '[' -z 1109558 ']'
00:23:54.611 23:49:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:23:54.611 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/tmp/host.sock
00:23:54.611 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # local max_retries=100
00:23:54.611 23:49:43 nvmf_tcp.nvmf_host_discovery --
common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:23:54.611 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:23:54.611 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # xtrace_disable
00:23:54.611 23:49:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:54.611 [2024-07-15 23:49:43.527922] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:23:54.611 [2024-07-15 23:49:43.527964] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109558 ]
00:23:54.611 [2024-07-15 23:49:43.582994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:54.868 [2024-07-15 23:49:43.662320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # return 0
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp
-a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:23:55.434 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery --
host/discovery.sh@55 -- # xargs
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.693
23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:55.693 23:49:44
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:55.693 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.694 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.694 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.952 [2024-07-15 23:49:44.677842] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.952
23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:23:55.952 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:55.953 23:49:44
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count ))
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max--
))
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == \n\v\m\e\0 ]]
00:23:55.953 23:49:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # sleep 1
00:23:56.519 [2024-07-15 23:49:45.356341] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:23:56.519 [2024-07-15 23:49:45.356362] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:23:56.519 [2024-07-15 23:49:45.356374] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:56.519 [2024-07-15 23:49:45.442643] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:23:56.778 [2024-07-15 23:49:45.539393] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:23:56.778 [2024-07-15 23:49:45.539414]
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[['
'"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:23:57.038 23:49:45
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:23:57.038 23:49:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 == \4\4\2\0 ]]
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:23:57.038 23:49:46
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.038 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count ))
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:23:57.297 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- ))
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count
00:23:57.298 23:49:46
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:57.298 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.555 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:57.555 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:23:57.555 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:23:57.555 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count ))
00:23:57.555 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0
00:23:57.555 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:23:57.555 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable
00:23:57.555 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:23:57.555 [2024-07-15 23:49:46.314291] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:57.555 [2024-07-15 23:49:46.315202] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:23:57.555 [2024-07-15 23:49:46.315228] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:23:57.555 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:23:57.555 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:23:57.556 23:49:46
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_names 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 
00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:57.556 [2024-07-15 23:49:46.403806] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- 
# eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:57.556 23:49:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # sleep 1 00:23:57.813 [2024-07-15 23:49:46.582705] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:57.813 [2024-07-15 23:49:46.582723] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:57.813 [2024-07-15 23:49:46.582727] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@909 -- # get_notification_count 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.746 [2024-07-15 23:49:47.578619] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:58.746 [2024-07-15 23:49:47.578641] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:58.746 [2024-07-15 23:49:47.580400] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.746 [2024-07-15 23:49:47.580415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:58.746 [2024-07-15 23:49:47.580424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.746 [2024-07-15 23:49:47.580430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.746 [2024-07-15 23:49:47.580438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.746 [2024-07-15 23:49:47.580444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.746 [2024-07-15 23:49:47.580451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:58.746 [2024-07-15 23:49:47.580457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:58.746 [2024-07-15 23:49:47.580464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1f10 is same with the state(5) to be set 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@909 -- # get_subsystem_names 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:58.746 [2024-07-15 23:49:47.590416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1f10 (9): Bad file descriptor 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.746 [2024-07-15 23:49:47.600452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:58.746 [2024-07-15 23:49:47.600740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.746 [2024-07-15 23:49:47.600756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d1f10 with addr=10.0.0.2, port=4420 00:23:58.746 [2024-07-15 23:49:47.600764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1f10 is same with the state(5) to be set 00:23:58.746 [2024-07-15 23:49:47.600775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1f10 (9): Bad file descriptor 00:23:58.746 [2024-07-15 23:49:47.600793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:58.746 [2024-07-15 23:49:47.600801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:58.746 
[2024-07-15 23:49:47.600808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:58.746 [2024-07-15 23:49:47.600818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.746 [2024-07-15 23:49:47.610510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:58.746 [2024-07-15 23:49:47.610637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.746 [2024-07-15 23:49:47.610649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d1f10 with addr=10.0.0.2, port=4420 00:23:58.746 [2024-07-15 23:49:47.610655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1f10 is same with the state(5) to be set 00:23:58.746 [2024-07-15 23:49:47.610665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1f10 (9): Bad file descriptor 00:23:58.746 [2024-07-15 23:49:47.610674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:58.746 [2024-07-15 23:49:47.610680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:58.746 [2024-07-15 23:49:47.610686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:58.746 [2024-07-15 23:49:47.610696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.746 [2024-07-15 23:49:47.620559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:58.746 [2024-07-15 23:49:47.620871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.746 [2024-07-15 23:49:47.620884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d1f10 with addr=10.0.0.2, port=4420 00:23:58.746 [2024-07-15 23:49:47.620893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1f10 is same with the state(5) to be set 00:23:58.746 [2024-07-15 23:49:47.620904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1f10 (9): Bad file descriptor 00:23:58.746 [2024-07-15 23:49:47.620939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:58.746 [2024-07-15 23:49:47.620947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:58.746 [2024-07-15 23:49:47.620954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:58.746 [2024-07-15 23:49:47.620963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.746 [2024-07-15 23:49:47.630608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:58.746 [2024-07-15 23:49:47.630921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.746 [2024-07-15 23:49:47.630935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d1f10 with addr=10.0.0.2, port=4420 00:23:58.746 [2024-07-15 23:49:47.630944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1f10 is same with the state(5) to be set 00:23:58.746 [2024-07-15 23:49:47.630955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1f10 (9): Bad file descriptor 00:23:58.746 [2024-07-15 23:49:47.630972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:58.746 [2024-07-15 23:49:47.630979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:58.746 [2024-07-15 23:49:47.630985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:58.746 [2024-07-15 23:49:47.630995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:23:58.746 [2024-07-15 23:49:47.640664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:58.746 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:58.747 [2024-07-15 23:49:47.641706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.747 [2024-07-15 23:49:47.641728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d1f10 with addr=10.0.0.2, port=4420 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.747 [2024-07-15 23:49:47.641737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1f10 is same with the state(5) to be set 00:23:58.747 [2024-07-15 23:49:47.641760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1f10 (9): Bad file descriptor 00:23:58.747 [2024-07-15 23:49:47.641781] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:58.747 [2024-07-15 23:49:47.641789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:58.747 [2024-07-15 23:49:47.641796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:58.747 [2024-07-15 23:49:47.641807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:58.747 [2024-07-15 23:49:47.650717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:58.747 [2024-07-15 23:49:47.650956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.747 [2024-07-15 23:49:47.650971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d1f10 with addr=10.0.0.2, port=4420 00:23:58.747 [2024-07-15 23:49:47.650981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1f10 is same with the state(5) to be set 00:23:58.747 [2024-07-15 23:49:47.650993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1f10 (9): Bad file descriptor 00:23:58.747 [2024-07-15 23:49:47.651002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:58.747 [2024-07-15 23:49:47.651009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 
00:23:58.747 [2024-07-15 23:49:47.651015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:58.747 [2024-07-15 23:49:47.651025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.747 [2024-07-15 23:49:47.660772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:58.747 [2024-07-15 23:49:47.661066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.747 [2024-07-15 23:49:47.661079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6d1f10 with addr=10.0.0.2, port=4420 00:23:58.747 [2024-07-15 23:49:47.661086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6d1f10 is same with the state(5) to be set 00:23:58.747 [2024-07-15 23:49:47.661096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6d1f10 (9): Bad file descriptor 00:23:58.747 [2024-07-15 23:49:47.661106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:58.747 [2024-07-15 23:49:47.661112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:58.747 [2024-07-15 23:49:47.661119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:58.747 [2024-07-15 23:49:47.661128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:58.747 [2024-07-15 23:49:47.665305] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:58.747 [2024-07-15 23:49:47.665319] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_subsystem_paths nvme0 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.747 23:49:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:58.747 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ 4421 == \4\4\2\1 ]] 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@909 -- # get_subsystem_names 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == '' ]] 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_bdev_list 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:59.005 23:49:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # [[ '' == '' ]] 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@906 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@907 -- # local max=10 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@908 -- # (( max-- )) 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # get_notification_count 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.005 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.006 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:59.006 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:59.006 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:59.006 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@909 -- # (( notification_count == expected_count )) 00:23:59.006 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # return 0 00:23:59.006 23:49:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:59.006 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:59.006 23:49:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.376 [2024-07-15 23:49:48.986326] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:00.376 [2024-07-15 23:49:48.986342] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:00.376 [2024-07-15 23:49:48.986355] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:00.376 [2024-07-15 23:49:49.072620] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:00.376 [2024-07-15 23:49:49.182056] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:00.376 [2024-07-15 23:49:49.182082] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.376 request: 00:24:00.376 { 00:24:00.376 "name": "nvme", 00:24:00.376 "trtype": "tcp", 00:24:00.376 "traddr": "10.0.0.2", 00:24:00.376 "adrfam": "ipv4", 00:24:00.376 "trsvcid": "8009", 00:24:00.376 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:00.376 "wait_for_attach": true, 00:24:00.376 "method": "bdev_nvme_start_discovery", 00:24:00.376 "req_id": 1 00:24:00.376 } 00:24:00.376 Got JSON-RPC error 
response 00:24:00.376 response: 00:24:00.376 { 00:24:00.376 "code": -17, 00:24:00.376 "message": "File exists" 00:24:00.376 } 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:00.376 23:49:49 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.376 request: 00:24:00.376 { 00:24:00.376 "name": "nvme_second", 00:24:00.376 
"trtype": "tcp", 00:24:00.376 "traddr": "10.0.0.2", 00:24:00.376 "adrfam": "ipv4", 00:24:00.376 "trsvcid": "8009", 00:24:00.376 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:00.376 "wait_for_attach": true, 00:24:00.376 "method": "bdev_nvme_start_discovery", 00:24:00.376 "req_id": 1 00:24:00.376 } 00:24:00.376 Got JSON-RPC error response 00:24:00.376 response: 00:24:00.376 { 00:24:00.376 "code": -17, 00:24:00.376 "message": "File exists" 00:24:00.376 } 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:00.376 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:00.633 23:49:49 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@642 -- # local es=0 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 
-f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:00.633 23:49:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.565 [2024-07-15 23:49:50.425643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.565 [2024-07-15 23:49:50.425677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x70ea00 with addr=10.0.0.2, port=8010 00:24:01.565 [2024-07-15 23:49:50.425693] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:01.565 [2024-07-15 23:49:50.425700] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:01.565 [2024-07-15 23:49:50.425707] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:02.496 [2024-07-15 23:49:51.428076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:02.496 [2024-07-15 23:49:51.428104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x70ea00 with addr=10.0.0.2, port=8010 00:24:02.496 [2024-07-15 23:49:51.428116] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:02.496 [2024-07-15 23:49:51.428122] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:02.496 [2024-07-15 23:49:51.428128] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:03.866 [2024-07-15 23:49:52.430180] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:03.866 request: 00:24:03.866 { 00:24:03.866 "name": "nvme_second", 00:24:03.866 "trtype": "tcp", 00:24:03.866 "traddr": "10.0.0.2", 00:24:03.866 "adrfam": "ipv4", 00:24:03.866 "trsvcid": "8010", 00:24:03.866 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:03.866 "wait_for_attach": false, 
00:24:03.866 "attach_timeout_ms": 3000, 00:24:03.866 "method": "bdev_nvme_start_discovery", 00:24:03.866 "req_id": 1 00:24:03.866 } 00:24:03.866 Got JSON-RPC error response 00:24:03.866 response: 00:24:03.866 { 00:24:03.866 "code": -110, 00:24:03.866 "message": "Connection timed out" 00:24:03.866 } 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@645 -- # es=1 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1109558 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # 
nvmftestfini 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:03.866 rmmod nvme_tcp 00:24:03.866 rmmod nvme_fabrics 00:24:03.866 rmmod nvme_keyring 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1109395 ']' 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1109395 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@942 -- # '[' -z 1109395 ']' 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # kill -0 1109395 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # uname 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1109395 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@960 -- # echo 'killing 
process with pid 1109395' 00:24:03.866 killing process with pid 1109395 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@961 -- # kill 1109395 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # wait 1109395 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.866 23:49:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.460 23:49:54 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:06.460 00:24:06.460 real 0m17.289s 00:24:06.460 user 0m22.208s 00:24:06.460 sys 0m4.971s 00:24:06.460 23:49:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:06.460 23:49:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:06.460 ************************************ 00:24:06.460 END TEST nvmf_host_discovery 00:24:06.460 ************************************ 00:24:06.460 23:49:54 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:06.460 23:49:54 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:06.460 23:49:54 nvmf_tcp -- 
common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:06.460 23:49:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:06.460 23:49:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:06.460 ************************************ 00:24:06.460 START TEST nvmf_host_multipath_status 00:24:06.460 ************************************ 00:24:06.460 23:49:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:06.460 * Looking for test storage... 00:24:06.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.460 23:49:55 
nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@47 -- # : 0 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:06.460 23:49:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:11.727 23:50:00 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.727 23:50:00 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:11.727 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:11.727 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:11.727 23:50:00 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:11.727 Found net devices under 0000:86:00.0: cvl_0_0 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:11.727 Found net devices under 0000:86:00.1: cvl_0_1 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:11.727 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:11.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:11.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:24:11.728 00:24:11.728 --- 10.0.0.2 ping statistics --- 00:24:11.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.728 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:11.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:11.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:24:11.728 00:24:11.728 --- 10.0.0.1 ping statistics --- 00:24:11.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:11.728 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1114630 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1114630 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@823 -- # '[' -z 1114630 ']' 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:11.728 23:50:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:11.728 [2024-07-15 23:50:00.436947] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:24:11.728 [2024-07-15 23:50:00.436993] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.728 [2024-07-15 23:50:00.495120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:11.728 [2024-07-15 23:50:00.568092] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.728 [2024-07-15 23:50:00.568135] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.728 [2024-07-15 23:50:00.568142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.728 [2024-07-15 23:50:00.568147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.728 [2024-07-15 23:50:00.568152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:11.728 [2024-07-15 23:50:00.568249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.728 [2024-07-15 23:50:00.568250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.295 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:12.295 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # return 0 00:24:12.295 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:12.295 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:12.295 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:12.553 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.553 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1114630 00:24:12.553 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:12.553 [2024-07-15 23:50:01.428967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.553 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:12.835 Malloc0 00:24:12.835 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:12.835 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.092 23:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.351 [2024-07-15 23:50:02.107650] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:13.351 [2024-07-15 23:50:02.280095] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1114908 00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1114908 /var/tmp/bdevperf.sock 00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@823 -- # '[' -z 1114908 ']' 00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:13.351 23:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:14.284 23:50:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:14.284 23:50:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # return 0 00:24:14.284 23:50:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:14.541 23:50:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:14.798 Nvme0n1 00:24:14.798 23:50:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:15.365 Nvme0n1 00:24:15.365 23:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:15.365 23:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:17.266 23:50:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:17.266 
23:50:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:17.524 23:50:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:17.524 23:50:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.898 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.157 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.157 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.157 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.157 23:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.415 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.415 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.415 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.415 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.415 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.415 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:24:19.415 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.415 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.672 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.673 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:19.673 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:19.930 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:20.189 23:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:21.123 23:50:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:21.123 23:50:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:21.123 23:50:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.123 23:50:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.380 23:50:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.380 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:21.380 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.380 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.380 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.380 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.380 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.380 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.638 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.638 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.638 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.638 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:21.896 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.896 23:50:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:21.896 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.896 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.154 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.154 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:22.154 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.154 23:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.154 23:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.154 23:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:22.154 23:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:22.411 23:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:22.669 23:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 
-- # sleep 1 00:24:23.601 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:23.601 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:23.601 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.601 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:23.858 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.858 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:23.858 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.859 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:23.859 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:23.859 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.116 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.116 23:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:24.116 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.116 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:24.116 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.116 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:24.397 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.397 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:24.397 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.397 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:24.676 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.676 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:24.676 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.676 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:24.676 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.676 23:50:13 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:24.676 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:24.933 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:25.191 23:50:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:26.122 23:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:26.122 23:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:26.122 23:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.122 23:50:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:26.379 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.379 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:26.379 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.379 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:26.379 
23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.379 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:26.379 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.379 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:26.636 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.636 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:26.636 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.636 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:26.893 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.893 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:26.893 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.893 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:27.149 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
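The repeated `port_status` checks in this trace all follow one pattern: call `bdev_nvme_get_io_paths` over the bdevperf RPC socket, then extract a single boolean for one listener port with `jq`. A minimal sketch of that selection, run against canned JSON shaped like the RPC output (the sample document and its values are illustrative, not captured from this run; the helper mirrors the script's `port_status` but is simplified):

```shell
# Illustrative sample shaped like bdev_nvme_get_io_paths output.
paths='{
  "poll_groups": [
    { "io_paths": [
        { "transport": { "trsvcid": "4420" }, "current": true,  "connected": true, "accessible": true  },
        { "transport": { "trsvcid": "4421" }, "current": false, "connected": true, "accessible": false }
    ] }
  ]
}'

# Same jq filter the trace uses: pick the io_path whose listener port
# matches, then print one attribute (current/connected/accessible)
# as a bare true/false for the [[ ... == \t\r\u\e ]] comparison.
port_status() {
    local port=$1 attr=$2
    echo "$paths" | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr"
}

port_status 4420 current      # prints: true
port_status 4421 accessible   # prints: false
```

The `-r` flag matters here: without it jq would still print `true`/`false` for booleans, but `-r` keeps string attributes unquoted as well, so the shell comparison stays a plain word match.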
00:24:27.149 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:27.149 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.150 23:50:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:27.150 23:50:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.150 23:50:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:27.150 23:50:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:27.406 23:50:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:27.662 23:50:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:28.595 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:28.595 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:28.595 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.595 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:28.853 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:28.853 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:28.853 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:28.853 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.853 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:28.853 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:28.853 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.853 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:29.112 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.112 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:29.112 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.112 23:50:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:29.370 23:50:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.370 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:29.370 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.370 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:29.627 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.627 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:29.627 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.627 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:29.627 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.627 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:29.627 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:29.885 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4421 -n optimized 00:24:30.143 23:50:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:31.077 23:50:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:31.077 23:50:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:31.077 23:50:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:31.077 23:50:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.335 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:31.335 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:31.335 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.335 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:31.335 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.335 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:31.335 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.335 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:24:31.593 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.593 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:31.593 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:31.593 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.851 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.851 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:31.851 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.851 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:32.109 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.110 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:32.110 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.110 23:50:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:32.110 23:50:21 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.110 23:50:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:32.368 23:50:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:32.368 23:50:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:32.626 23:50:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:32.883 23:50:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:33.817 23:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:33.817 23:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:33.817 23:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.817 23:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:34.075 23:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.075 23:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:34.075 23:50:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.075 23:50:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:34.075 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.075 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:34.075 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.075 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:34.334 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.334 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:34.334 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.334 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:34.591 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.591 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:34.591 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.591 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:34.849 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.849 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:34.849 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.849 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:34.849 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.849 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:34.849 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:35.106 23:50:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:35.364 23:50:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:36.297 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:36.297 23:50:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:36.297 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.297 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:36.555 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:36.555 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:36.556 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.556 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:36.556 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.556 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:36.556 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:36.556 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.813 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.813 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:36.813 
23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.813 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:37.071 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.071 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:37.071 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.071 23:50:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:37.328 23:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.328 23:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:37.328 23:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.328 23:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:37.328 23:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.328 23:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:37.328 23:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:37.586 23:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:37.844 23:50:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:38.779 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:38.779 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:38.779 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.779 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:39.045 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.045 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:39.045 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.045 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:39.045 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.045 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:39.045 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.045 23:50:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:39.337 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.337 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:39.337 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.337 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:39.606 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.607 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:39.607 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.607 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:39.607 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.607 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:39.607 23:50:28 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.607 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:39.863 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.863 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:39.863 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:40.120 23:50:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:40.377 23:50:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:41.309 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:41.309 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:41.309 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.309 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:41.566 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ 
true == \t\r\u\e ]] 00:24:41.566 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:41.566 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.566 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:41.566 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:41.566 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:41.566 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.566 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:41.823 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:41.823 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:41.823 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:41.823 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:42.079 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.079 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # 
port_status 4420 accessible true 00:24:42.079 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.079 23:50:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1114908 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@942 -- # '[' -z 1114908 ']' 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # kill -0 1114908 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # uname 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:42.337 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1114908 00:24:42.597 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:24:42.597 23:50:31 
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:24:42.597 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1114908' 00:24:42.597 killing process with pid 1114908 00:24:42.597 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill 1114908 00:24:42.597 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # wait 1114908 00:24:42.597 Connection closed with partial response: 00:24:42.597 00:24:42.597 00:24:42.597 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1114908 00:24:42.597 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.597 [2024-07-15 23:50:02.343038] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:24:42.597 [2024-07-15 23:50:02.343089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1114908 ] 00:24:42.597 [2024-07-15 23:50:02.395443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.597 [2024-07-15 23:50:02.467812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.597 Running I/O for 90 seconds... 
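The teardown above runs autotest_common.sh's `killprocess` against the bdevperf pid (1114908 in this run): confirm the pid is alive with `kill -0`, read its process name via `ps -o comm=`, send the kill, then `wait` to reap it. A sketch of that pattern under stated assumptions — a background `sleep` stands in for bdevperf, and the helper is a simplification of the real `killprocess`:

```shell
# Background sleep stands in for the long-running bdevperf process.
sleep 30 &
pid=$!

kill -0 "$pid"                    # assert the process is still alive
name=$(ps -o comm= -p "$pid")     # inspect its command name, as killprocess does

kill "$pid"                       # terminate it
wait "$pid" 2>/dev/null || true   # reap; ignore the signal-induced exit status

kill -0 "$pid" 2>/dev/null && echo "still running" || echo "gone"   # prints: gone
```

Reaping with `wait` before re-checking is what makes the final `kill -0` reliable: until the parent waits, the pid would linger as a zombie and `kill -0` could still succeed.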
00:24:42.597 [2024-07-15 23:50:16.248128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:42.597 [2024-07-15 23:50:16.248166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
[... identical *NOTICE* pairs repeat: nvme_io_qpair_print_command WRITE sqid:1 len:8, each followed by spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02), for lba 29400 through 30216 ...]
00:24:42.600 [2024-07-15 23:50:16.252111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:42.600 [2024-07-15 23:50:16.252118] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:30240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:30248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:30256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:30264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:30272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:30280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:30304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:30312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.600 [2024-07-15 23:50:16.252388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:42.600 [2024-07-15 23:50:16.252405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:30320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:30328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:30344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:30360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:30368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:30376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:16.252670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:30408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:16.252677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.601 [2024-07-15 23:50:29.107583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.601 [2024-07-15 23:50:29.107668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.601 [2024-07-15 23:50:29.107681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.602 [2024-07-15 23:50:29.107770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:81600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.602 [2024-07-15 23:50:29.107870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.602 [2024-07-15 23:50:29.107892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.602 [2024-07-15 23:50:29.107912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.107986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.107994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.108963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.602 [2024-07-15 23:50:29.108985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.109003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.109011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.109023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.109031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.109044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.109051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.109063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.109071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.109084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.109090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.109106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.109113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.109127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.602 [2024-07-15 23:50:29.109135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:42.602 [2024-07-15 23:50:29.109301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.109312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.109326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.109333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.109346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.109353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.109365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.109373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.109385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.109392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.109405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.109412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.109424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.109432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.109445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.109452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.110365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.110387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.110409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.110431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.110451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.110471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.110492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.603 [2024-07-15 23:50:29.110591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:42.603 [2024-07-15 23:50:29.110745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:42.603 [2024-07-15 23:50:29.110753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:42.603 Received shutdown signal, test time was about 27.119566 seconds 00:24:42.603 00:24:42.603 Latency(us) 00:24:42.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.603 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:42.603 Verification LBA range: start 0x0 length 0x4000 00:24:42.603 Nvme0n1 : 27.12 10347.46 40.42 0.00 0.00 12350.74 616.18 3019898.88 00:24:42.603 =================================================================================================================== 00:24:42.603 Total : 10347.46 40.42 0.00 0.00 12350.74 616.18 3019898.88 00:24:42.603 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:42.861 rmmod nvme_tcp 00:24:42.861 rmmod nvme_fabrics 00:24:42.861 rmmod nvme_keyring 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1114630 ']' 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1114630 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@942 -- # '[' -z 1114630 ']' 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # kill -0 1114630 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # uname 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1114630 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@960 -- # echo 'killing process with pid 1114630' 00:24:42.861 killing process with pid 1114630 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill 1114630 00:24:42.861 23:50:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # wait 1114630 00:24:43.118 23:50:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:43.118 23:50:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:43.118 23:50:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:43.118 23:50:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.119 23:50:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.119 23:50:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.119 23:50:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.119 23:50:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.649 23:50:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:45.649 00:24:45.649 real 0m39.157s 00:24:45.649 user 1m46.231s 00:24:45.649 sys 0m10.508s 00:24:45.649 23:50:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:45.649 23:50:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:45.649 ************************************ 00:24:45.649 END TEST nvmf_host_multipath_status 00:24:45.649 ************************************ 00:24:45.649 23:50:34 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:24:45.649 23:50:34 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:45.649 23:50:34 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:45.649 23:50:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:45.649 23:50:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:45.649 ************************************ 00:24:45.649 START TEST nvmf_discovery_remove_ifc 00:24:45.649 ************************************ 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:45.649 * Looking for test storage... 00:24:45.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.649 23:50:34 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:45.649 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:45.650 23:50:34 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:45.650 23:50:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.906 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:50.907 23:50:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:50.907 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp 
== rdma ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:50.907 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:50.907 Found net devices under 0000:86:00.0: cvl_0_0 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:50.907 Found net devices under 0000:86:00.1: cvl_0_1 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.907 23:50:39 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set lo up 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:50.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:24:50.907 00:24:50.907 --- 10.0.0.2 ping statistics --- 00:24:50.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.907 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:50.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:24:50.907 00:24:50.907 --- 10.0.0.1 ping statistics --- 00:24:50.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.907 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1123380 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1123380 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@823 -- # '[' -z 1123380 ']' 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:50.907 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.908 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:50.908 23:50:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.908 [2024-07-15 23:50:39.529614] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:24:50.908 [2024-07-15 23:50:39.529658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.908 [2024-07-15 23:50:39.583197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.908 [2024-07-15 23:50:39.662364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.908 [2024-07-15 23:50:39.662398] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.908 [2024-07-15 23:50:39.662405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.908 [2024-07-15 23:50:39.662411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.908 [2024-07-15 23:50:39.662416] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:50.908 [2024-07-15 23:50:39.662440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # return 0 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.472 [2024-07-15 23:50:40.377977] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.472 [2024-07-15 23:50:40.386092] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:51.472 null0 00:24:51.472 [2024-07-15 23:50:40.418110] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1123452 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:51.472 23:50:40 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1123452 /tmp/host.sock 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@823 -- # '[' -z 1123452 ']' 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # local rpc_addr=/tmp/host.sock 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:51.472 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:51.472 23:50:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.730 [2024-07-15 23:50:40.482872] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:24:51.730 [2024-07-15 23:50:40.482911] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123452 ] 00:24:51.730 [2024-07-15 23:50:40.535619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.730 [2024-07-15 23:50:40.614747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # return 0 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery 
-b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:52.661 23:50:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.589 [2024-07-15 23:50:42.436797] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:53.589 [2024-07-15 23:50:42.436817] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:53.589 [2024-07-15 23:50:42.436828] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:53.844 [2024-07-15 23:50:42.563227] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:53.844 [2024-07-15 23:50:42.620202] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:53.844 [2024-07-15 23:50:42.620249] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:53.844 [2024-07-15 23:50:42.620268] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:53.844 [2024-07-15 23:50:42.620280] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:53.844 [2024-07-15 23:50:42.620298] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.844 
23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.844 [2024-07-15 23:50:42.625317] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x153ce30 was disconnected and freed. delete nvme_qpair. 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.844 23:50:42 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:53.844 23:50:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:55.223 23:50:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.223 23:50:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.223 23:50:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.223 23:50:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:55.223 23:50:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.223 23:50:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.223 23:50:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.223 23:50:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:55.223 23:50:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:55.223 23:50:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.153 23:50:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.153 23:50:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.153 23:50:44 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.153 23:50:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.153 23:50:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.153 23:50:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:56.153 23:50:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.153 23:50:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:56.153 23:50:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:56.153 23:50:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.081 23:50:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.081 23:50:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.081 23:50:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.082 23:50:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.082 23:50:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:57.082 23:50:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.082 23:50:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.082 23:50:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:57.082 23:50:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:57.082 23:50:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:58.014 23:50:46 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.014 23:50:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.014 23:50:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.014 23:50:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:58.014 23:50:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.014 23:50:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.014 23:50:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.291 23:50:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:58.291 23:50:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:58.291 23:50:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:59.235 23:50:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.235 23:50:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.235 23:50:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.235 23:50:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.235 23:50:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:59.235 23:50:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.235 23:50:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.235 23:50:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:59.235 [2024-07-15 
23:50:48.061652] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:59.235 [2024-07-15 23:50:48.061691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.235 [2024-07-15 23:50:48.061702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.235 [2024-07-15 23:50:48.061712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.235 [2024-07-15 23:50:48.061719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.235 [2024-07-15 23:50:48.061727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.235 [2024-07-15 23:50:48.061734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.235 [2024-07-15 23:50:48.061746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.235 [2024-07-15 23:50:48.061753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.235 [2024-07-15 23:50:48.061761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.235 [2024-07-15 23:50:48.061769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.235 [2024-07-15 23:50:48.061776] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503690 is same with the state(5) to be set 00:24:59.235 [2024-07-15 23:50:48.071673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1503690 (9): Bad file descriptor 00:24:59.235 23:50:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:59.235 23:50:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:59.235 [2024-07-15 23:50:48.081711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.167 23:50:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.167 23:50:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.167 23:50:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.167 23:50:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:00.167 23:50:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.167 23:50:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:00.167 23:50:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.167 [2024-07-15 23:50:49.122251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:00.167 [2024-07-15 23:50:49.122292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1503690 with addr=10.0.0.2, port=4420 00:25:00.167 [2024-07-15 23:50:49.122306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1503690 is same with the state(5) to be set 00:25:00.167 [2024-07-15 23:50:49.122330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1503690 (9): Bad file descriptor 00:25:00.167 [2024-07-15 23:50:49.122730] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:00.167 [2024-07-15 23:50:49.122751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:00.167 [2024-07-15 23:50:49.122760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:00.167 [2024-07-15 23:50:49.122770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:00.167 [2024-07-15 23:50:49.122789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.167 [2024-07-15 23:50:49.122799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:00.167 23:50:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:00.424 23:50:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:00.424 23:50:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.353 [2024-07-15 23:50:50.125279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:01.353 [2024-07-15 23:50:50.125303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:01.353 [2024-07-15 23:50:50.125310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:01.354 [2024-07-15 23:50:50.125320] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:01.354 [2024-07-15 23:50:50.125333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.354 [2024-07-15 23:50:50.125350] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:01.354 [2024-07-15 23:50:50.125374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.354 [2024-07-15 23:50:50.125383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.354 [2024-07-15 23:50:50.125392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.354 [2024-07-15 23:50:50.125399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.354 [2024-07-15 23:50:50.125406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.354 [2024-07-15 23:50:50.125414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.354 [2024-07-15 23:50:50.125422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.354 [2024-07-15 23:50:50.125428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.354 [2024-07-15 23:50:50.125435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:01.354 [2024-07-15 23:50:50.125442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:01.354 [2024-07-15 23:50:50.125448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:25:01.354 [2024-07-15 23:50:50.125686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1502a80 (9): Bad file descriptor 00:25:01.354 [2024-07-15 23:50:50.126696] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:01.354 [2024-07-15 23:50:50.126708] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:01.354 23:50:50 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:01.354 23:50:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:02.722 23:50:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.722 23:50:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.722 23:50:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.722 23:50:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:02.722 23:50:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:02.722 23:50:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:02.722 23:50:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:02.722 23:50:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:02.722 
23:50:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:02.722 23:50:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:03.286 [2024-07-15 23:50:52.181761] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:03.286 [2024-07-15 23:50:52.181779] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:03.286 [2024-07-15 23:50:52.181791] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:03.544 [2024-07-15 23:50:52.268057] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:03.544 [2024-07-15 23:50:52.364385] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:03.544 [2024-07-15 23:50:52.364420] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:03.544 [2024-07-15 23:50:52.364436] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:03.544 [2024-07-15 23:50:52.364448] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:03.544 [2024-07-15 23:50:52.364455] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:03.544 [2024-07-15 23:50:52.370442] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x15198d0 was disconnected and freed. delete nvme_qpair. 
00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1123452 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@942 -- # '[' -z 1123452 ']' 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # kill -0 1123452 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # uname 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1123452 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:25:03.544 23:50:52 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1123452' 00:25:03.544 killing process with pid 1123452 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # kill 1123452 00:25:03.544 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # wait 1123452 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.801 rmmod nvme_tcp 00:25:03.801 rmmod nvme_fabrics 00:25:03.801 rmmod nvme_keyring 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1123380 ']' 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1123380 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@942 -- # '[' -z 1123380 ']' 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # 
kill -0 1123380 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # uname 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1123380 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1123380' 00:25:03.801 killing process with pid 1123380 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # kill 1123380 00:25:03.801 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # wait 1123380 00:25:04.060 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:04.060 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:04.060 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:04.060 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:04.060 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:04.060 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.060 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:04.060 23:50:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.590 23:50:54 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:25:06.590 00:25:06.590 real 0m20.828s 00:25:06.590 user 0m26.595s 00:25:06.590 sys 0m5.124s 00:25:06.590 23:50:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:06.590 23:50:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:06.590 ************************************ 00:25:06.590 END TEST nvmf_discovery_remove_ifc 00:25:06.590 ************************************ 00:25:06.590 23:50:55 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:25:06.590 23:50:55 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:06.590 23:50:55 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:25:06.590 23:50:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:06.590 23:50:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:06.590 ************************************ 00:25:06.590 START TEST nvmf_identify_kernel_target 00:25:06.590 ************************************ 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:06.590 * Looking for test storage... 
00:25:06.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.590 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.591 23:50:55 
nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.591 23:50:55 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:06.591 23:50:55 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:11.848 23:51:00 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.848 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:11.849 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.849 23:51:00 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:11.849 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:11.849 Found net devices under 0000:86:00.0: cvl_0_0 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:11.849 Found net devices under 0000:86:00.1: cvl_0_1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link 
set cvl_0_1 up 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:11.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:25:11.849 00:25:11.849 --- 10.0.0.2 ping statistics --- 00:25:11.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.849 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:25:11.849 00:25:11.849 --- 10.0.0.1 ping statistics --- 00:25:11.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.849 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.849 23:51:00 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:11.849 23:51:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:14.380 Waiting for block devices as requested 00:25:14.380 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:14.380 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:14.380 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:14.638 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:14.638 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:14.638 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:14.638 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:14.897 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:14.897 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:14.897 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:14.897 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:15.155 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:15.155 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:15.155 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:15.414 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:15.414 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:15.414 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:15.414 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:15.414 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:15.414 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:15.414 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:25:15.414 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:15.414 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:25:15.414 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:15.414 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:15.414 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:15.674 No valid GPT data, bailing 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@669 -- # echo 1 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:15.674 00:25:15.674 Discovery Log Number of Records 2, Generation counter 2 00:25:15.674 =====Discovery Log Entry 0====== 00:25:15.674 trtype: tcp 00:25:15.674 adrfam: ipv4 00:25:15.674 subtype: current discovery subsystem 00:25:15.674 treq: not specified, sq flow control disable supported 00:25:15.674 portid: 1 00:25:15.674 trsvcid: 4420 00:25:15.674 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:15.674 traddr: 10.0.0.1 00:25:15.674 eflags: none 00:25:15.674 sectype: none 00:25:15.674 =====Discovery Log Entry 1====== 00:25:15.674 trtype: tcp 00:25:15.674 adrfam: ipv4 00:25:15.674 subtype: nvme subsystem 00:25:15.674 treq: not specified, sq flow control disable supported 00:25:15.674 portid: 1 00:25:15.674 trsvcid: 4420 00:25:15.674 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:15.674 traddr: 10.0.0.1 00:25:15.674 eflags: none 00:25:15.674 sectype: none 00:25:15.674 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:15.674 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:15.674 ===================================================== 00:25:15.674 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:15.674 ===================================================== 00:25:15.674 Controller Capabilities/Features 00:25:15.674 ================================ 00:25:15.674 Vendor ID: 0000 00:25:15.674 Subsystem Vendor ID: 0000 00:25:15.674 Serial Number: 309b79ac28f8924490ef 00:25:15.674 Model Number: Linux 00:25:15.674 Firmware Version: 6.7.0-68 00:25:15.674 Recommended Arb Burst: 0 00:25:15.674 IEEE OUI Identifier: 00 00 00 00:25:15.674 Multi-path I/O 00:25:15.674 May have multiple subsystem ports: No 00:25:15.674 May have multiple controllers: No 00:25:15.674 Associated with SR-IOV VF: No 00:25:15.674 Max Data Transfer Size: Unlimited 00:25:15.674 Max Number of Namespaces: 0 00:25:15.674 Max Number of I/O Queues: 1024 00:25:15.674 NVMe Specification Version (VS): 1.3 00:25:15.674 NVMe Specification Version (Identify): 1.3 00:25:15.674 Maximum Queue Entries: 1024 00:25:15.674 Contiguous Queues Required: No 00:25:15.674 Arbitration Mechanisms Supported 00:25:15.674 Weighted Round Robin: Not Supported 00:25:15.674 Vendor Specific: Not Supported 00:25:15.674 Reset Timeout: 7500 ms 00:25:15.674 Doorbell Stride: 4 bytes 00:25:15.674 NVM Subsystem Reset: Not Supported 00:25:15.674 Command Sets Supported 00:25:15.674 NVM Command Set: Supported 00:25:15.674 Boot Partition: Not Supported 00:25:15.674 Memory Page Size Minimum: 4096 bytes 00:25:15.674 Memory Page Size Maximum: 4096 bytes 00:25:15.674 Persistent Memory Region: Not Supported 00:25:15.674 Optional Asynchronous Events Supported 00:25:15.674 Namespace Attribute Notices: Not Supported 00:25:15.674 Firmware Activation Notices: Not Supported 00:25:15.674 ANA Change Notices: Not Supported 00:25:15.674 PLE Aggregate Log Change Notices: Not Supported 00:25:15.674 LBA Status Info Alert Notices: Not Supported 
00:25:15.674 EGE Aggregate Log Change Notices: Not Supported 00:25:15.674 Normal NVM Subsystem Shutdown event: Not Supported 00:25:15.674 Zone Descriptor Change Notices: Not Supported 00:25:15.674 Discovery Log Change Notices: Supported 00:25:15.674 Controller Attributes 00:25:15.674 128-bit Host Identifier: Not Supported 00:25:15.674 Non-Operational Permissive Mode: Not Supported 00:25:15.674 NVM Sets: Not Supported 00:25:15.674 Read Recovery Levels: Not Supported 00:25:15.674 Endurance Groups: Not Supported 00:25:15.674 Predictable Latency Mode: Not Supported 00:25:15.674 Traffic Based Keep ALive: Not Supported 00:25:15.674 Namespace Granularity: Not Supported 00:25:15.674 SQ Associations: Not Supported 00:25:15.674 UUID List: Not Supported 00:25:15.674 Multi-Domain Subsystem: Not Supported 00:25:15.674 Fixed Capacity Management: Not Supported 00:25:15.674 Variable Capacity Management: Not Supported 00:25:15.674 Delete Endurance Group: Not Supported 00:25:15.674 Delete NVM Set: Not Supported 00:25:15.674 Extended LBA Formats Supported: Not Supported 00:25:15.674 Flexible Data Placement Supported: Not Supported 00:25:15.674 00:25:15.674 Controller Memory Buffer Support 00:25:15.674 ================================ 00:25:15.674 Supported: No 00:25:15.674 00:25:15.674 Persistent Memory Region Support 00:25:15.674 ================================ 00:25:15.674 Supported: No 00:25:15.674 00:25:15.674 Admin Command Set Attributes 00:25:15.674 ============================ 00:25:15.674 Security Send/Receive: Not Supported 00:25:15.674 Format NVM: Not Supported 00:25:15.674 Firmware Activate/Download: Not Supported 00:25:15.674 Namespace Management: Not Supported 00:25:15.674 Device Self-Test: Not Supported 00:25:15.674 Directives: Not Supported 00:25:15.674 NVMe-MI: Not Supported 00:25:15.674 Virtualization Management: Not Supported 00:25:15.674 Doorbell Buffer Config: Not Supported 00:25:15.674 Get LBA Status Capability: Not Supported 00:25:15.674 Command & Feature 
Lockdown Capability: Not Supported 00:25:15.674 Abort Command Limit: 1 00:25:15.674 Async Event Request Limit: 1 00:25:15.674 Number of Firmware Slots: N/A 00:25:15.674 Firmware Slot 1 Read-Only: N/A 00:25:15.674 Firmware Activation Without Reset: N/A 00:25:15.674 Multiple Update Detection Support: N/A 00:25:15.674 Firmware Update Granularity: No Information Provided 00:25:15.674 Per-Namespace SMART Log: No 00:25:15.674 Asymmetric Namespace Access Log Page: Not Supported 00:25:15.674 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:15.674 Command Effects Log Page: Not Supported 00:25:15.674 Get Log Page Extended Data: Supported 00:25:15.674 Telemetry Log Pages: Not Supported 00:25:15.674 Persistent Event Log Pages: Not Supported 00:25:15.674 Supported Log Pages Log Page: May Support 00:25:15.674 Commands Supported & Effects Log Page: Not Supported 00:25:15.674 Feature Identifiers & Effects Log Page:May Support 00:25:15.674 NVMe-MI Commands & Effects Log Page: May Support 00:25:15.674 Data Area 4 for Telemetry Log: Not Supported 00:25:15.674 Error Log Page Entries Supported: 1 00:25:15.674 Keep Alive: Not Supported 00:25:15.674 00:25:15.674 NVM Command Set Attributes 00:25:15.674 ========================== 00:25:15.674 Submission Queue Entry Size 00:25:15.674 Max: 1 00:25:15.674 Min: 1 00:25:15.674 Completion Queue Entry Size 00:25:15.674 Max: 1 00:25:15.674 Min: 1 00:25:15.674 Number of Namespaces: 0 00:25:15.674 Compare Command: Not Supported 00:25:15.674 Write Uncorrectable Command: Not Supported 00:25:15.674 Dataset Management Command: Not Supported 00:25:15.674 Write Zeroes Command: Not Supported 00:25:15.674 Set Features Save Field: Not Supported 00:25:15.674 Reservations: Not Supported 00:25:15.674 Timestamp: Not Supported 00:25:15.674 Copy: Not Supported 00:25:15.674 Volatile Write Cache: Not Present 00:25:15.674 Atomic Write Unit (Normal): 1 00:25:15.674 Atomic Write Unit (PFail): 1 00:25:15.674 Atomic Compare & Write Unit: 1 00:25:15.674 Fused 
Compare & Write: Not Supported 00:25:15.674 Scatter-Gather List 00:25:15.674 SGL Command Set: Supported 00:25:15.674 SGL Keyed: Not Supported 00:25:15.674 SGL Bit Bucket Descriptor: Not Supported 00:25:15.674 SGL Metadata Pointer: Not Supported 00:25:15.674 Oversized SGL: Not Supported 00:25:15.674 SGL Metadata Address: Not Supported 00:25:15.674 SGL Offset: Supported 00:25:15.674 Transport SGL Data Block: Not Supported 00:25:15.675 Replay Protected Memory Block: Not Supported 00:25:15.675 00:25:15.675 Firmware Slot Information 00:25:15.675 ========================= 00:25:15.675 Active slot: 0 00:25:15.675 00:25:15.675 00:25:15.675 Error Log 00:25:15.675 ========= 00:25:15.675 00:25:15.675 Active Namespaces 00:25:15.675 ================= 00:25:15.675 Discovery Log Page 00:25:15.675 ================== 00:25:15.675 Generation Counter: 2 00:25:15.675 Number of Records: 2 00:25:15.675 Record Format: 0 00:25:15.675 00:25:15.675 Discovery Log Entry 0 00:25:15.675 ---------------------- 00:25:15.675 Transport Type: 3 (TCP) 00:25:15.675 Address Family: 1 (IPv4) 00:25:15.675 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:15.675 Entry Flags: 00:25:15.675 Duplicate Returned Information: 0 00:25:15.675 Explicit Persistent Connection Support for Discovery: 0 00:25:15.675 Transport Requirements: 00:25:15.675 Secure Channel: Not Specified 00:25:15.675 Port ID: 1 (0x0001) 00:25:15.675 Controller ID: 65535 (0xffff) 00:25:15.675 Admin Max SQ Size: 32 00:25:15.675 Transport Service Identifier: 4420 00:25:15.675 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:15.675 Transport Address: 10.0.0.1 00:25:15.675 Discovery Log Entry 1 00:25:15.675 ---------------------- 00:25:15.675 Transport Type: 3 (TCP) 00:25:15.675 Address Family: 1 (IPv4) 00:25:15.675 Subsystem Type: 2 (NVM Subsystem) 00:25:15.675 Entry Flags: 00:25:15.675 Duplicate Returned Information: 0 00:25:15.675 Explicit Persistent Connection Support for Discovery: 0 00:25:15.675 Transport 
Requirements: 00:25:15.675 Secure Channel: Not Specified 00:25:15.675 Port ID: 1 (0x0001) 00:25:15.675 Controller ID: 65535 (0xffff) 00:25:15.675 Admin Max SQ Size: 32 00:25:15.675 Transport Service Identifier: 4420 00:25:15.675 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:15.675 Transport Address: 10.0.0.1 00:25:15.675 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:15.935 get_feature(0x01) failed 00:25:15.935 get_feature(0x02) failed 00:25:15.935 get_feature(0x04) failed 00:25:15.935 ===================================================== 00:25:15.935 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:15.935 ===================================================== 00:25:15.935 Controller Capabilities/Features 00:25:15.935 ================================ 00:25:15.935 Vendor ID: 0000 00:25:15.935 Subsystem Vendor ID: 0000 00:25:15.935 Serial Number: 526b47288e92bf11e209 00:25:15.935 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:15.935 Firmware Version: 6.7.0-68 00:25:15.935 Recommended Arb Burst: 6 00:25:15.935 IEEE OUI Identifier: 00 00 00 00:25:15.935 Multi-path I/O 00:25:15.935 May have multiple subsystem ports: Yes 00:25:15.935 May have multiple controllers: Yes 00:25:15.935 Associated with SR-IOV VF: No 00:25:15.935 Max Data Transfer Size: Unlimited 00:25:15.935 Max Number of Namespaces: 1024 00:25:15.935 Max Number of I/O Queues: 128 00:25:15.935 NVMe Specification Version (VS): 1.3 00:25:15.935 NVMe Specification Version (Identify): 1.3 00:25:15.935 Maximum Queue Entries: 1024 00:25:15.935 Contiguous Queues Required: No 00:25:15.935 Arbitration Mechanisms Supported 00:25:15.935 Weighted Round Robin: Not Supported 00:25:15.935 Vendor Specific: Not Supported 00:25:15.935 Reset Timeout: 7500 ms 
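The kernel-target configuration that nvmf/common.sh performed earlier in this log (the `mkdir`, bare `echo`, and `ln -s` steps under the nvmet configfs tree, followed by `nvme discover`) can be sketched as a standalone script. The attribute file names (`attr_allow_any_host`, `device_path`, `addr_traddr`, …) follow the kernel's nvmet configfs interface and are not shown in the trace (the redirections are elided); `CFG` defaults to a throwaway directory here so the sketch is a dry run — on a real target it would be `/sys/kernel/config/nvmet`, run as root with `nvmet`/`nvmet-tcp` loaded.

```shell
# Dry-run sketch of the configfs setup traced above; hypothetical CFG override,
# real target: CFG=/sys/kernel/config/nvmet with nvmet/nvmet-tcp loaded.
set -euo pipefail

CFG="${CFG:-$(mktemp -d)/nvmet}"   # throwaway tree for the dry run
NQN="nqn.2016-06.io.spdk:testnqn"
DEV="/dev/nvme0n1"

SUBSYS="$CFG/subsystems/$NQN"
PORT="$CFG/ports/1"

# Create the subsystem, one namespace, and one port (nvmf/common.sh@658-660).
mkdir -p "$SUBSYS/namespaces/1" "$PORT/subsystems"

# Attribute writes corresponding to the bare "echo ..." lines in the trace
# (the redirection targets are assumptions based on the nvmet configfs layout).
echo "SPDK-$NQN" > "$SUBSYS/attr_model"                # model string
echo 1           > "$SUBSYS/attr_allow_any_host"       # no host allow-list
echo "$DEV"      > "$SUBSYS/namespaces/1/device_path"  # back the NS with nvme0n1
echo 1           > "$SUBSYS/namespaces/1/enable"

echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp      > "$PORT/addr_trtype"
echo 4420     > "$PORT/addr_trsvcid"
echo ipv4     > "$PORT/addr_adrfam"

# Expose the subsystem on the port (nvmf/common.sh@677).
ln -s "$SUBSYS" "$PORT/subsystems/$NQN"
```

Once the symlink lands, the subsystem is visible to hosts — which is exactly what the `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` call in the log confirms with its two discovery-log records.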
00:25:15.935 Doorbell Stride: 4 bytes 00:25:15.935 NVM Subsystem Reset: Not Supported 00:25:15.935 Command Sets Supported 00:25:15.935 NVM Command Set: Supported 00:25:15.935 Boot Partition: Not Supported 00:25:15.935 Memory Page Size Minimum: 4096 bytes 00:25:15.935 Memory Page Size Maximum: 4096 bytes 00:25:15.935 Persistent Memory Region: Not Supported 00:25:15.935 Optional Asynchronous Events Supported 00:25:15.935 Namespace Attribute Notices: Supported 00:25:15.935 Firmware Activation Notices: Not Supported 00:25:15.935 ANA Change Notices: Supported 00:25:15.935 PLE Aggregate Log Change Notices: Not Supported 00:25:15.935 LBA Status Info Alert Notices: Not Supported 00:25:15.935 EGE Aggregate Log Change Notices: Not Supported 00:25:15.935 Normal NVM Subsystem Shutdown event: Not Supported 00:25:15.935 Zone Descriptor Change Notices: Not Supported 00:25:15.935 Discovery Log Change Notices: Not Supported 00:25:15.935 Controller Attributes 00:25:15.935 128-bit Host Identifier: Supported 00:25:15.935 Non-Operational Permissive Mode: Not Supported 00:25:15.935 NVM Sets: Not Supported 00:25:15.935 Read Recovery Levels: Not Supported 00:25:15.935 Endurance Groups: Not Supported 00:25:15.935 Predictable Latency Mode: Not Supported 00:25:15.935 Traffic Based Keep ALive: Supported 00:25:15.935 Namespace Granularity: Not Supported 00:25:15.935 SQ Associations: Not Supported 00:25:15.935 UUID List: Not Supported 00:25:15.935 Multi-Domain Subsystem: Not Supported 00:25:15.935 Fixed Capacity Management: Not Supported 00:25:15.935 Variable Capacity Management: Not Supported 00:25:15.935 Delete Endurance Group: Not Supported 00:25:15.935 Delete NVM Set: Not Supported 00:25:15.935 Extended LBA Formats Supported: Not Supported 00:25:15.935 Flexible Data Placement Supported: Not Supported 00:25:15.935 00:25:15.935 Controller Memory Buffer Support 00:25:15.935 ================================ 00:25:15.935 Supported: No 00:25:15.935 00:25:15.935 Persistent Memory Region Support 
00:25:15.935 ================================ 00:25:15.935 Supported: No 00:25:15.935 00:25:15.935 Admin Command Set Attributes 00:25:15.935 ============================ 00:25:15.935 Security Send/Receive: Not Supported 00:25:15.935 Format NVM: Not Supported 00:25:15.935 Firmware Activate/Download: Not Supported 00:25:15.935 Namespace Management: Not Supported 00:25:15.935 Device Self-Test: Not Supported 00:25:15.935 Directives: Not Supported 00:25:15.935 NVMe-MI: Not Supported 00:25:15.935 Virtualization Management: Not Supported 00:25:15.935 Doorbell Buffer Config: Not Supported 00:25:15.935 Get LBA Status Capability: Not Supported 00:25:15.935 Command & Feature Lockdown Capability: Not Supported 00:25:15.935 Abort Command Limit: 4 00:25:15.935 Async Event Request Limit: 4 00:25:15.935 Number of Firmware Slots: N/A 00:25:15.935 Firmware Slot 1 Read-Only: N/A 00:25:15.935 Firmware Activation Without Reset: N/A 00:25:15.935 Multiple Update Detection Support: N/A 00:25:15.935 Firmware Update Granularity: No Information Provided 00:25:15.935 Per-Namespace SMART Log: Yes 00:25:15.935 Asymmetric Namespace Access Log Page: Supported 00:25:15.935 ANA Transition Time : 10 sec 00:25:15.935 00:25:15.935 Asymmetric Namespace Access Capabilities 00:25:15.935 ANA Optimized State : Supported 00:25:15.935 ANA Non-Optimized State : Supported 00:25:15.935 ANA Inaccessible State : Supported 00:25:15.935 ANA Persistent Loss State : Supported 00:25:15.935 ANA Change State : Supported 00:25:15.935 ANAGRPID is not changed : No 00:25:15.935 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:15.935 00:25:15.935 ANA Group Identifier Maximum : 128 00:25:15.935 Number of ANA Group Identifiers : 128 00:25:15.935 Max Number of Allowed Namespaces : 1024 00:25:15.935 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:15.935 Command Effects Log Page: Supported 00:25:15.935 Get Log Page Extended Data: Supported 00:25:15.935 Telemetry Log Pages: Not Supported 00:25:15.935 Persistent Event Log 
Pages: Not Supported 00:25:15.935 Supported Log Pages Log Page: May Support 00:25:15.935 Commands Supported & Effects Log Page: Not Supported 00:25:15.935 Feature Identifiers & Effects Log Page:May Support 00:25:15.935 NVMe-MI Commands & Effects Log Page: May Support 00:25:15.935 Data Area 4 for Telemetry Log: Not Supported 00:25:15.935 Error Log Page Entries Supported: 128 00:25:15.935 Keep Alive: Supported 00:25:15.935 Keep Alive Granularity: 1000 ms 00:25:15.935 00:25:15.935 NVM Command Set Attributes 00:25:15.935 ========================== 00:25:15.935 Submission Queue Entry Size 00:25:15.935 Max: 64 00:25:15.935 Min: 64 00:25:15.935 Completion Queue Entry Size 00:25:15.935 Max: 16 00:25:15.935 Min: 16 00:25:15.935 Number of Namespaces: 1024 00:25:15.935 Compare Command: Not Supported 00:25:15.935 Write Uncorrectable Command: Not Supported 00:25:15.935 Dataset Management Command: Supported 00:25:15.935 Write Zeroes Command: Supported 00:25:15.935 Set Features Save Field: Not Supported 00:25:15.935 Reservations: Not Supported 00:25:15.935 Timestamp: Not Supported 00:25:15.935 Copy: Not Supported 00:25:15.935 Volatile Write Cache: Present 00:25:15.935 Atomic Write Unit (Normal): 1 00:25:15.935 Atomic Write Unit (PFail): 1 00:25:15.935 Atomic Compare & Write Unit: 1 00:25:15.935 Fused Compare & Write: Not Supported 00:25:15.935 Scatter-Gather List 00:25:15.936 SGL Command Set: Supported 00:25:15.936 SGL Keyed: Not Supported 00:25:15.936 SGL Bit Bucket Descriptor: Not Supported 00:25:15.936 SGL Metadata Pointer: Not Supported 00:25:15.936 Oversized SGL: Not Supported 00:25:15.936 SGL Metadata Address: Not Supported 00:25:15.936 SGL Offset: Supported 00:25:15.936 Transport SGL Data Block: Not Supported 00:25:15.936 Replay Protected Memory Block: Not Supported 00:25:15.936 00:25:15.936 Firmware Slot Information 00:25:15.936 ========================= 00:25:15.936 Active slot: 0 00:25:15.936 00:25:15.936 Asymmetric Namespace Access 00:25:15.936 
=========================== 00:25:15.936 Change Count : 0 00:25:15.936 Number of ANA Group Descriptors : 1 00:25:15.936 ANA Group Descriptor : 0 00:25:15.936 ANA Group ID : 1 00:25:15.936 Number of NSID Values : 1 00:25:15.936 Change Count : 0 00:25:15.936 ANA State : 1 00:25:15.936 Namespace Identifier : 1 00:25:15.936 00:25:15.936 Commands Supported and Effects 00:25:15.936 ============================== 00:25:15.936 Admin Commands 00:25:15.936 -------------- 00:25:15.936 Get Log Page (02h): Supported 00:25:15.936 Identify (06h): Supported 00:25:15.936 Abort (08h): Supported 00:25:15.936 Set Features (09h): Supported 00:25:15.936 Get Features (0Ah): Supported 00:25:15.936 Asynchronous Event Request (0Ch): Supported 00:25:15.936 Keep Alive (18h): Supported 00:25:15.936 I/O Commands 00:25:15.936 ------------ 00:25:15.936 Flush (00h): Supported 00:25:15.936 Write (01h): Supported LBA-Change 00:25:15.936 Read (02h): Supported 00:25:15.936 Write Zeroes (08h): Supported LBA-Change 00:25:15.936 Dataset Management (09h): Supported 00:25:15.936 00:25:15.936 Error Log 00:25:15.936 ========= 00:25:15.936 Entry: 0 00:25:15.936 Error Count: 0x3 00:25:15.936 Submission Queue Id: 0x0 00:25:15.936 Command Id: 0x5 00:25:15.936 Phase Bit: 0 00:25:15.936 Status Code: 0x2 00:25:15.936 Status Code Type: 0x0 00:25:15.936 Do Not Retry: 1 00:25:15.936 Error Location: 0x28 00:25:15.936 LBA: 0x0 00:25:15.936 Namespace: 0x0 00:25:15.936 Vendor Log Page: 0x0 00:25:15.936 ----------- 00:25:15.936 Entry: 1 00:25:15.936 Error Count: 0x2 00:25:15.936 Submission Queue Id: 0x0 00:25:15.936 Command Id: 0x5 00:25:15.936 Phase Bit: 0 00:25:15.936 Status Code: 0x2 00:25:15.936 Status Code Type: 0x0 00:25:15.936 Do Not Retry: 1 00:25:15.936 Error Location: 0x28 00:25:15.936 LBA: 0x0 00:25:15.936 Namespace: 0x0 00:25:15.936 Vendor Log Page: 0x0 00:25:15.936 ----------- 00:25:15.936 Entry: 2 00:25:15.936 Error Count: 0x1 00:25:15.936 Submission Queue Id: 0x0 00:25:15.936 Command Id: 0x4 00:25:15.936 
Phase Bit: 0 00:25:15.936 Status Code: 0x2 00:25:15.936 Status Code Type: 0x0 00:25:15.936 Do Not Retry: 1 00:25:15.936 Error Location: 0x28 00:25:15.936 LBA: 0x0 00:25:15.936 Namespace: 0x0 00:25:15.936 Vendor Log Page: 0x0 00:25:15.936 00:25:15.936 Number of Queues 00:25:15.936 ================ 00:25:15.936 Number of I/O Submission Queues: 128 00:25:15.936 Number of I/O Completion Queues: 128 00:25:15.936 00:25:15.936 ZNS Specific Controller Data 00:25:15.936 ============================ 00:25:15.936 Zone Append Size Limit: 0 00:25:15.936 00:25:15.936 00:25:15.936 Active Namespaces 00:25:15.936 ================= 00:25:15.936 get_feature(0x05) failed 00:25:15.936 Namespace ID:1 00:25:15.936 Command Set Identifier: NVM (00h) 00:25:15.936 Deallocate: Supported 00:25:15.936 Deallocated/Unwritten Error: Not Supported 00:25:15.936 Deallocated Read Value: Unknown 00:25:15.936 Deallocate in Write Zeroes: Not Supported 00:25:15.936 Deallocated Guard Field: 0xFFFF 00:25:15.936 Flush: Supported 00:25:15.936 Reservation: Not Supported 00:25:15.936 Namespace Sharing Capabilities: Multiple Controllers 00:25:15.936 Size (in LBAs): 1953525168 (931GiB) 00:25:15.936 Capacity (in LBAs): 1953525168 (931GiB) 00:25:15.936 Utilization (in LBAs): 1953525168 (931GiB) 00:25:15.936 UUID: edd9e854-b4cc-4d05-8df6-12bb60f1de76 00:25:15.936 Thin Provisioning: Not Supported 00:25:15.936 Per-NS Atomic Units: Yes 00:25:15.936 Atomic Boundary Size (Normal): 0 00:25:15.936 Atomic Boundary Size (PFail): 0 00:25:15.936 Atomic Boundary Offset: 0 00:25:15.936 NGUID/EUI64 Never Reused: No 00:25:15.936 ANA group ID: 1 00:25:15.936 Namespace Write Protected: No 00:25:15.936 Number of LBA Formats: 1 00:25:15.936 Current LBA Format: LBA Format #00 00:25:15.936 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:15.936 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:15.936 rmmod nvme_tcp 00:25:15.936 rmmod nvme_fabrics 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.936 23:51:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.841 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- 
# ip -4 addr flush cvl_0_1 00:25:17.841 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:17.841 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:17.841 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:18.100 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:18.100 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:18.100 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:18.100 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:18.100 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:18.100 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:18.100 23:51:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:20.676 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
00:25:20.676 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:20.676 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:20.934 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:21.501 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:21.760 00:25:21.760 real 0m15.526s 00:25:21.760 user 0m3.746s 00:25:21.760 sys 0m8.065s 00:25:21.760 23:51:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:21.760 23:51:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.760 ************************************ 00:25:21.760 END TEST nvmf_identify_kernel_target 00:25:21.760 ************************************ 00:25:21.760 23:51:10 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:25:21.760 23:51:10 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:21.760 23:51:10 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:25:21.760 23:51:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:25:21.760 23:51:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.760 ************************************ 00:25:21.760 START TEST nvmf_auth_host 00:25:21.760 ************************************ 00:25:21.760 23:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:21.760 * Looking for test storage... 
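The `clean_kernel_target` teardown traced just above undoes the setup in reverse: disable and unlink first, then `rmdir` from the leaves up, then unload `nvmet_tcp`/`nvmet`. A dry-run sketch follows, with the same hypothetical `CFG` override as a throwaway directory; on a real target the attribute files and auto-created directories belong to configfs and vanish with their parent directories, so the extra `rm -f`/`rmdir` of them here is plain-filesystem bookkeeping only.

```shell
# Dry-run sketch of clean_kernel_target (nvmf/common.sh@686-695);
# hypothetical CFG override, real target: CFG=/sys/kernel/config/nvmet.
set -euo pipefail

CFG="${CFG:-$(mktemp -d)/nvmet}"
NQN="nqn.2016-06.io.spdk:testnqn"
SUBSYS="$CFG/subsystems/$NQN"
PORT="$CFG/ports/1"

# Recreate the leftover state so the teardown below has something to remove
# (on a real host this tree is left over from the setup phase).
mkdir -p "$SUBSYS/namespaces/1" "$PORT/subsystems"
ln -s "$SUBSYS" "$PORT/subsystems/$NQN"

# Teardown, mirroring the traced commands:
echo 0 > "$SUBSYS/namespaces/1/enable"   # disable the namespace first
rm -f  "$SUBSYS/namespaces/1/enable"     # plain file in the dry run; a real
                                         # configfs attr goes with its directory
rm -f  "$PORT/subsystems/$NQN"           # detach subsystem from the port
rmdir  "$SUBSYS/namespaces/1" "$SUBSYS/namespaces" "$SUBSYS"
rmdir  "$PORT/subsystems" "$PORT"
# ...followed on the real host by: modprobe -r nvmet_tcp nvmet
```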
00:25:22.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.019 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.020 
23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:22.020 
23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:22.020 23:51:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:27.286 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.286 23:51:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:27.286 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:25:27.286 Found net devices under 0000:86:00.0: cvl_0_0 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:27.286 Found net devices under 0000:86:00.1: cvl_0_1 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:27.286 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:27.287 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:27.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:27.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:25:27.544 00:25:27.544 --- 10.0.0.2 ping statistics --- 00:25:27.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.544 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:27.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:25:27.544 00:25:27.544 --- 10.0.0.1 ping statistics --- 00:25:27.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.544 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.544 23:51:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1135843 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1135843 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@823 -- # '[' -z 1135843 ']' 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:27.544 23:51:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # return 0 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:28.475 23:51:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2f766deb812013ff24b6ecbdcb3039b2 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GL0 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2f766deb812013ff24b6ecbdcb3039b2 0 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2f766deb812013ff24b6ecbdcb3039b2 0 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2f766deb812013ff24b6ecbdcb3039b2 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GL0 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GL0 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.GL0 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 
64 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=84c4ceba4d31a9200cdce516c90eb5d1194e7ecfa0ab7194463bc3bbab9079ce 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.LYg 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 84c4ceba4d31a9200cdce516c90eb5d1194e7ecfa0ab7194463bc3bbab9079ce 3 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 84c4ceba4d31a9200cdce516c90eb5d1194e7ecfa0ab7194463bc3bbab9079ce 3 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=84c4ceba4d31a9200cdce516c90eb5d1194e7ecfa0ab7194463bc3bbab9079ce 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.LYg 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.LYg 00:25:28.475 23:51:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.LYg 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0afc1d96f95986441ac9621d5883281720ad8c47fd4119f7 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jWO 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0afc1d96f95986441ac9621d5883281720ad8c47fd4119f7 0 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0afc1d96f95986441ac9621d5883281720ad8c47fd4119f7 0 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:28.475 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0afc1d96f95986441ac9621d5883281720ad8c47fd4119f7 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jWO 00:25:28.476 23:51:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jWO 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jWO 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9664388fb3dc5fc21f19bdf3c89a4ab85c09abe52be229c1 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IoT 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9664388fb3dc5fc21f19bdf3c89a4ab85c09abe52be229c1 2 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9664388fb3dc5fc21f19bdf3c89a4ab85c09abe52be229c1 2 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9664388fb3dc5fc21f19bdf3c89a4ab85c09abe52be229c1 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:28.476 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:28.733 23:51:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IoT 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IoT 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.IoT 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c8d15c1a0e92401f8205cbd42d439d32 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.sEn 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c8d15c1a0e92401f8205cbd42d439d32 1 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c8d15c1a0e92401f8205cbd42d439d32 1 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c8d15c1a0e92401f8205cbd42d439d32 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.sEn 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.sEn 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.sEn 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d8ebe2f7f825fca15025cfccfe49021 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Jhr 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d8ebe2f7f825fca15025cfccfe49021 1 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d8ebe2f7f825fca15025cfccfe49021 1 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5d8ebe2f7f825fca15025cfccfe49021 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:28.733 
23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Jhr 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Jhr 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Jhr 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=16ab26da3ae95479c436a3857528ec0ab7333d2216c3b09b 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.POm 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 16ab26da3ae95479c436a3857528ec0ab7333d2216c3b09b 2 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 16ab26da3ae95479c436a3857528ec0ab7333d2216c3b09b 2 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=16ab26da3ae95479c436a3857528ec0ab7333d2216c3b09b 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.POm 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.POm 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.POm 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=af59c0c390f3fc3263e03087f7b9b943 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.a9p 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key af59c0c390f3fc3263e03087f7b9b943 0 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 af59c0c390f3fc3263e03087f7b9b943 0 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:28.733 23:51:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=af59c0c390f3fc3263e03087f7b9b943 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.a9p 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.a9p 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.a9p 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c6c877b9fd0cc3b9dbddeb19fcc2e242094b005964d3a1dff9d34d10a07d7827 00:25:28.733 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sYU 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c6c877b9fd0cc3b9dbddeb19fcc2e242094b005964d3a1dff9d34d10a07d7827 3 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c6c877b9fd0cc3b9dbddeb19fcc2e242094b005964d3a1dff9d34d10a07d7827 3 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local 
prefix key digest 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c6c877b9fd0cc3b9dbddeb19fcc2e242094b005964d3a1dff9d34d10a07d7827 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sYU 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sYU 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.sYU 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1135843 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@823 -- # '[' -z 1135843 ']' 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # return 0 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.GL0 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.LYg ]] 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.LYg 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jWO 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:28.990 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n 
/tmp/spdk.key-sha384.IoT ]] 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IoT 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.sEn 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Jhr ]] 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Jhr 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.POm 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:29.246 
23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.a9p ]] 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.a9p 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:29.246 23:51:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.sYU 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:29.246 23:51:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:31.812 Waiting for block devices as requested 00:25:31.812 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:31.812 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:31.812 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:31.812 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:31.812 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:31.812 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:32.069 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:32.069 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:32.069 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:32.326 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:32.326 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:32.326 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:32.326 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:32.583 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:32.583 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:32.583 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:32.839 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@1659 -- # [[ none != none ]] 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:33.403 No valid GPT data, bailing 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- 
# echo ipv4 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:33.403 00:25:33.403 Discovery Log Number of Records 2, Generation counter 2 00:25:33.403 =====Discovery Log Entry 0====== 00:25:33.403 trtype: tcp 00:25:33.403 adrfam: ipv4 00:25:33.403 subtype: current discovery subsystem 00:25:33.403 treq: not specified, sq flow control disable supported 00:25:33.403 portid: 1 00:25:33.403 trsvcid: 4420 00:25:33.403 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:33.403 traddr: 10.0.0.1 00:25:33.403 eflags: none 00:25:33.403 sectype: none 00:25:33.403 =====Discovery Log Entry 1====== 00:25:33.403 trtype: tcp 00:25:33.403 adrfam: ipv4 00:25:33.403 subtype: nvme subsystem 00:25:33.403 treq: not specified, sq flow control disable supported 00:25:33.403 portid: 1 00:25:33.403 trsvcid: 4420 00:25:33.403 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:33.403 traddr: 10.0.0.1 00:25:33.403 eflags: none 00:25:33.403 sectype: none 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.403 23:51:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.403 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:33.404 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.661 nvme0n1 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:33.661 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.919 nvme0n1 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.919 23:51:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:33.919 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.177 nvme0n1 00:25:34.177 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.177 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.177 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.177 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.177 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.177 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.177 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.177 23:51:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.177 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.177 23:51:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.177 23:51:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.177 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.435 nvme0n1 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.435 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.436 nvme0n1 00:25:34.436 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.436 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.436 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.436 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.436 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.693 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.694 nvme0n1 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.694 23:51:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.694 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.951 nvme0n1 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:34.951 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # dhgroup=ffdhe3072 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:35.208 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:35.209 23:51:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.209 23:51:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.209 nvme0n1 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.209 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.466 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.466 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.466 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.466 23:51:24 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.467 nvme0n1 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.467 23:51:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.467 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.724 nvme0n1 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.724 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.982 nvme0n1 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 
00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.982 23:51:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:35.982 23:51:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.239 nvme0n1 00:25:36.239 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:36.239 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.239 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.239 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:36.239 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.239 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:36.239 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.239 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.239 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:36.239 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.497 
23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:36.497 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.755 nvme0n1 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:36.755 23:51:25 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:36.755 23:51:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.756 23:51:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:36.756 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.015 nvme0n1 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 
00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:37.015 23:51:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.306 nvme0n1 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 
00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.306 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.307 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.307 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ 
-z 10.0.0.1 ]] 00:25:37.307 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.307 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.307 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:37.307 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.576 nvme0n1 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe6144 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:37.576 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.142 nvme0n1 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.142 23:51:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.399 nvme0n1 00:25:38.399 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.399 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.399 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.399 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.399 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.399 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.657 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.914 nvme0n1 
00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.914 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:38.915 23:51:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.480 nvme0n1 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:39.480 23:51:28 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:39.480 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.738 nvme0n1 00:25:39.738 23:51:28 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:39.738 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.738 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.738 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:39.738 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.738 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:39.738 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.738 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.738 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:39.738 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 
00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.997 23:51:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:39.997 23:51:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.565 nvme0n1 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:40.565 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.131 nvme0n1 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:41.131 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:41.132 23:51:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.698 nvme0n1 00:25:41.698 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:41.699 23:51:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.699 23:51:30 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:41.699 23:51:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.265 nvme0n1 00:25:42.265 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:42.265 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.265 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.265 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:42.265 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.265 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:42.265 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.265 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.265 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:42.265 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.523 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.524 23:51:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:42.524 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.090 nvme0n1 00:25:43.090 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.091 
23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.091 23:51:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.091 nvme0n1 00:25:43.091 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.091 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.091 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.091 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.091 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.349 
23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.349 nvme0n1 00:25:43.349 23:51:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.349 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.350 23:51:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.350 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.608 23:51:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.608 nvme0n1 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.608 
23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.608 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.608 23:51:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.867 nvme0n1 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.867 23:51:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:43.867 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.125 nvme0n1 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 
]] 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.125 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.126 23:51:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.126 23:51:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.383 nvme0n1 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:44.383 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:44.384 23:51:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 
00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.384 nvme0n1 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.384 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.641 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.642 nvme0n1 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.642 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.899 23:51:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.899 nvme0n1 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:44.899 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.899 23:51:33 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 
-- # local digest dhgroup keyid ckey 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.157 23:51:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.157 nvme0n1 00:25:45.157 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.157 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.158 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.158 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.158 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.158 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.158 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.158 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.158 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.158 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.415 
23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.415 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.416 
23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.416 nvme0n1 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.416 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.674 23:51:34 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.674 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.932 nvme0n1 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.932 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:45.933 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.191 nvme0n1 00:25:46.191 23:51:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.191 23:51:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.191 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.450 nvme0n1 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.450 23:51:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:46.450 23:51:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.450 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.708 nvme0n1 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.708 23:51:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:46.708 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.709 23:51:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:46.709 23:51:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.276 nvme0n1 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.276 
23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe6144 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@553 -- # xtrace_disable 00:25:47.276 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.534 nvme0n1 00:25:47.534 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:47.534 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.534 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.534 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:47.534 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:47.793 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.051 nvme0n1 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.051 23:51:36 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:48.051 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.052 23:51:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.052 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.052 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.052 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.052 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.052 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.052 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.052 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:48.052 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.617 nvme0n1 00:25:48.617 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:48.617 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.617 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.617 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:48.617 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A 
ip_candidates 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:48.618 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.876 nvme0n1 00:25:48.876 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:48.876 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.876 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.876 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:48.876 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.876 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:48.876 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.876 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.876 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 
-- # xtrace_disable 00:25:48.876 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:49.134 23:51:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:49.134 23:51:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.700 nvme0n1 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=1 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:49.700 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:49.701 23:51:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.267 nvme0n1 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.267 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:50.268 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.834 nvme0n1 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.834 23:51:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:50.834 23:51:39 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:50.834 23:51:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.403 nvme0n1 00:25:51.403 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:51.403 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.403 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.403 23:51:40 nvmf_tcp.nvmf_auth_host 
-- common/autotest_common.sh@553 -- # xtrace_disable 00:25:51.403 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.403 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:51.661 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:51.661 
23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.662 23:51:40 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:51.662 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.229 nvme0n1 00:25:52.229 23:51:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.229 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.229 23:51:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:52.229 
23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.229 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.230 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.488 nvme0n1 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.488 23:51:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe2048 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.488 nvme0n1 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.488 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.748 
23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:52.748 nvme0n1 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.748 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:52.749 
23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.749 23:51:41 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:52.749 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.007 nvme0n1 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.007 23:51:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.264 nvme0n1 00:25:53.264 23:51:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.264 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.265 23:51:42 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.265 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.522 nvme0n1 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.522 
23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 1 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 
00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.522 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.779 nvme0n1 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:53.779 23:51:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:53.779 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.035 nvme0n1 00:25:54.035 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.035 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.035 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.035 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.035 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.035 23:51:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.035 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.036 23:51:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.036 nvme0n1 00:25:54.036 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.036 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.036 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.292 23:51:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.292 nvme0n1 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.292 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.293 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.293 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.293 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.293 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 
]] 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL: 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]] 00:25:54.549 23:51:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.549 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.804 nvme0n1 00:25:54.804 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.804 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.804 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.804 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.804 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:54.805 23:51:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:54.805 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.061 nvme0n1 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm:
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]]
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy:
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.061 23:51:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.318 nvme0n1
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==:
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg:
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==:
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]]
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg:
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.318 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.574 nvme0n1
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=:
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=:
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.574 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.830 nvme0n1
00:25:55.830 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.830 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:55.830 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:55.830 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.830 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.830 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:55.830 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:55.830 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:55.830 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:55.830 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL:
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=:
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL:
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]]
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=:
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:56.086 23:51:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.342 nvme0n1
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==:
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==:
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==:
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]]
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==:
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:56.342 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.904 nvme0n1
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm:
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy:
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm:
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]]
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy:
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:56.904 23:51:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.161 nvme0n1
00:25:57.161 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:57.161 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:57.161 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:57.161 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:57.161 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.161 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==:
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg:
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==:
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]]
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg:
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:57.418 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.675 nvme0n1
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=:
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=:
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:57.675 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:57.676 23:51:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:58.241 nvme0n1
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL:
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=:
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmY3NjZkZWI4MTIwMTNmZjI0YjZlY2JkY2IzMDM5YjLvX0zL:
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=: ]]
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODRjNGNlYmE0ZDMxYTkyMDBjZGNlNTE2YzkwZWI1ZDExOTRlN2VjZmEwYWI3MTk0NDYzYmMzYmJhYjkwNzljZWJkHVs=:
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.241 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.804 nvme0n1 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.804 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:58.805 23:51:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.369 nvme0n1 00:25:59.369 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:59.369 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.369 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.369 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:59.369 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.369 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:59.369 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzhkMTVjMWEwZTkyNDAxZjgyMDVjYmQ0MmQ0MzlkMzJCtuAm: 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: ]] 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWQ4ZWJlMmY3ZjgyNWZjYTE1MDI1Y2ZjY2ZlNDkwMjHI6uCy: 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe8192 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:59.370 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:25:59.628 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.193 nvme0n1 00:26:00.193 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:00.193 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:00.193 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTZhYjI2ZGEzYWU5NTQ3OWM0MzZhMzg1NzUyOGVjMGFiNzMzM2QyMjE2YzNiMDliVZf+Fg==: 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: ]] 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWY1OWMwYzM5MGYzZmMzMjYzZTAzMDg3ZjdiOWI5NDOkXHfg: 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.194 23:51:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.194 23:51:49 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:00.194 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.758 nvme0n1 00:26:00.758 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:00.758 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.758 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.758 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:00.758 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.758 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:00.758 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.758 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.758 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:00.758 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:00.759 23:51:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzZjODc3YjlmZDBjYzNiOWRiZGRlYjE5ZmNjMmUyNDIwOTRiMDA1OTY0ZDNhMWRmZjlkMzRkMTBhMDdkNzgyN1yF8Ws=: 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:00.759 23:51:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:00.759 23:51:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.324 nvme0n1 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGFmYzFkOTZmOTU5ODY0NDFhYzk2MjFkNTg4MzI4MTcyMGFkOGM0N2ZkNDExOWY3sekuPA==: 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 
-- # [[ -z DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: ]] 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTY2NDM4OGZiM2RjNWZjMjFmMTliZGYzYzg5YTRhYjg1YzA5YWJlNTJiZTIyOWMxj0470g==: 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:01.324 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # local es=0
00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd
00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd
00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in
00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable
00:26:01.583 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:01.583 request:
00:26:01.583 {
00:26:01.583 "name": "nvme0",
00:26:01.583 "trtype": "tcp",
00:26:01.583 "traddr": "10.0.0.1",
00:26:01.583 "adrfam": "ipv4",
00:26:01.583 "trsvcid": "4420",
00:26:01.583 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:26:01.583 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:26:01.583 "prchk_reftag": false,
00:26:01.583 "prchk_guard": false,
00:26:01.583 "hdgst": false,
00:26:01.584 "ddgst": false,
00:26:01.584 "method": "bdev_nvme_attach_controller",
00:26:01.584 "req_id": 1
00:26:01.584 }
00:26:01.584 Got JSON-RPC error response
00:26:01.584 response:
00:26:01.584 {
00:26:01.584 "code": -5,
00:26:01.584 "message": "Input/output error"
00:26:01.584 }
00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]]
00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1
00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.584 request: 00:26:01.584 { 00:26:01.584 "name": "nvme0", 00:26:01.584 "trtype": "tcp", 00:26:01.584 "traddr": "10.0.0.1", 00:26:01.584 "adrfam": "ipv4", 00:26:01.584 "trsvcid": "4420", 00:26:01.584 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:01.584 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:01.584 "prchk_reftag": false, 00:26:01.584 "prchk_guard": false, 00:26:01.584 "hdgst": false, 00:26:01.584 "ddgst": false, 00:26:01.584 "dhchap_key": "key2", 00:26:01.584 "method": "bdev_nvme_attach_controller", 00:26:01.584 "req_id": 1 00:26:01.584 } 00:26:01.584 Got JSON-RPC error response 00:26:01.584 response: 00:26:01.584 { 
00:26:01.584 "code": -5, 00:26:01.584 "message": "Input/output error" 00:26:01.584 } 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:01.584 
23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:01.584 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.841 request: 00:26:01.841 { 00:26:01.841 "name": "nvme0", 00:26:01.841 "trtype": "tcp", 00:26:01.841 "traddr": "10.0.0.1", 00:26:01.841 "adrfam": "ipv4", 00:26:01.841 "trsvcid": "4420", 00:26:01.841 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:01.841 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:01.841 
"prchk_reftag": false, 00:26:01.841 "prchk_guard": false, 00:26:01.841 "hdgst": false, 00:26:01.841 "ddgst": false, 00:26:01.841 "dhchap_key": "key1", 00:26:01.841 "dhchap_ctrlr_key": "ckey2", 00:26:01.841 "method": "bdev_nvme_attach_controller", 00:26:01.841 "req_id": 1 00:26:01.841 } 00:26:01.841 Got JSON-RPC error response 00:26:01.841 response: 00:26:01.841 { 00:26:01.841 "code": -5, 00:26:01.841 "message": "Input/output error" 00:26:01.841 } 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:01.841 rmmod nvme_tcp 00:26:01.841 rmmod nvme_fabrics 00:26:01.841 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host 
-- nvmf/common.sh@125 -- # return 0 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1135843 ']' 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1135843 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@942 -- # '[' -z 1135843 ']' 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # kill -0 1135843 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # uname 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1135843 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1135843' 00:26:01.842 killing process with pid 1135843 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@961 -- # kill 1135843 00:26:01.842 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # wait 1135843 00:26:02.100 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:02.100 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:02.100 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:02.100 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:02.100 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:02.100 23:51:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.100 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:26:02.100 23:51:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:03.999 23:51:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:06.551 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 
00:26:06.551 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:06.551 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:06.842 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:07.409 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:07.668 23:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.GL0 /tmp/spdk.key-null.jWO /tmp/spdk.key-sha256.sEn /tmp/spdk.key-sha384.POm /tmp/spdk.key-sha512.sYU /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:07.668 23:51:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:10.201 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:10.201 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:10.201 
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:10.201 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:10.201 00:26:10.201 real 0m48.080s 00:26:10.201 user 0m42.596s 00:26:10.201 sys 0m11.233s 00:26:10.201 23:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:10.201 23:51:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.201 ************************************ 00:26:10.201 END TEST nvmf_auth_host 00:26:10.201 ************************************ 00:26:10.201 23:51:58 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:26:10.201 23:51:58 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:26:10.201 23:51:58 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:10.201 23:51:58 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:26:10.201 23:51:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:10.201 23:51:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:10.201 ************************************ 00:26:10.201 START TEST nvmf_digest 00:26:10.201 ************************************ 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:10.201 * Looking for test storage... 
00:26:10.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.201 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:10.202 23:51:58 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:15.471 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:15.472 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:15.472 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:15.472 Found net devices under 0000:86:00.0: cvl_0_0 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 
00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:15.472 Found net devices under 0000:86:00.1: cvl_0_1 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip 
netns add cvl_0_0_ns_spdk 00:26:15.472 23:52:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:15.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:15.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:26:15.472 00:26:15.472 --- 10.0.0.2 ping statistics --- 00:26:15.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.472 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:15.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:15.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:26:15.472 00:26:15.472 --- 10.0.0.1 ping statistics --- 00:26:15.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.472 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:15.472 ************************************ 00:26:15.472 START TEST nvmf_digest_clean 00:26:15.472 ************************************ 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1117 -- # run_digest 00:26:15.472 23:52:04 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1148730 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1148730 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1148730 ']' 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:15.472 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:15.473 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:15.473 23:52:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:15.473 [2024-07-15 23:52:04.293146] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:26:15.473 [2024-07-15 23:52:04.293190] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.473 [2024-07-15 23:52:04.349153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.473 [2024-07-15 23:52:04.427746] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:15.473 [2024-07-15 23:52:04.427779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.473 [2024-07-15 23:52:04.427785] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.473 [2024-07-15 23:52:04.427791] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.473 [2024-07-15 23:52:04.427796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:15.473 [2024-07-15 23:52:04.427815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:16.411 null0 00:26:16.411 [2024-07-15 23:52:05.211885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.411 [2024-07-15 23:52:05.236058] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:16.411 
23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1148974 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1148974 /var/tmp/bperf.sock 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1148974 ']' 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:16.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:16.411 23:52:05 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:16.411 [2024-07-15 23:52:05.282751] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:26:16.411 [2024-07-15 23:52:05.282792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148974 ] 00:26:16.411 [2024-07-15 23:52:05.335694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.670 [2024-07-15 23:52:05.408513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.239 23:52:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:17.239 23:52:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:26:17.239 23:52:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:17.239 23:52:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:17.239 23:52:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:17.497 23:52:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.497 23:52:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.756 nvme0n1 00:26:17.756 23:52:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:17.756 23:52:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:18.015 Running I/O for 2 
seconds... 00:26:19.922 00:26:19.922 Latency(us) 00:26:19.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:19.922 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:19.922 nvme0n1 : 2.00 27313.97 106.70 0.00 0.00 4680.72 2222.53 10143.83 00:26:19.922 =================================================================================================================== 00:26:19.922 Total : 27313.97 106.70 0.00 0.00 4680.72 2222.53 10143.83 00:26:19.922 0 00:26:19.922 23:52:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:19.922 23:52:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:19.922 23:52:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:19.922 23:52:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:19.922 | select(.opcode=="crc32c") 00:26:19.922 | "\(.module_name) \(.executed)"' 00:26:19.922 23:52:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1148974 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1148974 ']' 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # 
kill -0 1148974 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1148974 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1148974' 00:26:20.181 killing process with pid 1148974 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1148974 00:26:20.181 Received shutdown signal, test time was about 2.000000 seconds 00:26:20.181 00:26:20.181 Latency(us) 00:26:20.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.181 =================================================================================================================== 00:26:20.181 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.181 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1148974 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:20.441 23:52:09 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1149502 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1149502 /var/tmp/bperf.sock 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1149502 ']' 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:20.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:20.441 23:52:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.441 [2024-07-15 23:52:09.271755] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:26:20.441 [2024-07-15 23:52:09.271805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149502 ] 00:26:20.441 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:20.441 Zero copy mechanism will not be used. 00:26:20.441 [2024-07-15 23:52:09.326108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.441 [2024-07-15 23:52:09.405536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.380 23:52:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:21.380 23:52:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:26:21.380 23:52:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:21.380 23:52:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:21.380 23:52:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:21.380 23:52:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.380 23:52:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.639 nvme0n1 00:26:21.898 23:52:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:21.898 23:52:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.898 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:21.898 Zero copy mechanism will not be used. 00:26:21.898 Running I/O for 2 seconds... 00:26:23.804 00:26:23.804 Latency(us) 00:26:23.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:23.804 nvme0n1 : 2.00 4420.60 552.58 0.00 0.00 3616.85 719.47 9402.99 00:26:23.804 =================================================================================================================== 00:26:23.804 Total : 4420.60 552.58 0.00 0.00 3616.85 719.47 9402.99 00:26:23.804 0 00:26:23.804 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:23.804 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:23.804 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:23.804 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:23.804 | select(.opcode=="crc32c") 00:26:23.804 | "\(.module_name) \(.executed)"' 00:26:23.804 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:24.061 23:52:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1149502 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1149502 ']' 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1149502 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1149502 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1149502' 00:26:24.061 killing process with pid 1149502 00:26:24.061 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1149502 00:26:24.062 Received shutdown signal, test time was about 2.000000 seconds 00:26:24.062 00:26:24.062 Latency(us) 00:26:24.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.062 =================================================================================================================== 00:26:24.062 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.062 23:52:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1149502 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:24.320 23:52:13 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1150156 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1150156 /var/tmp/bperf.sock 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1150156 ']' 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:24.320 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.320 [2024-07-15 23:52:13.173307] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:26:24.320 [2024-07-15 23:52:13.173354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150156 ] 00:26:24.320 [2024-07-15 23:52:13.227417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.579 [2024-07-15 23:52:13.307583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.145 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:25.145 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:26:25.145 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:25.145 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:25.145 23:52:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:25.405 23:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.405 23:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.666 nvme0n1 00:26:25.666 23:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:25.666 23:52:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.666 Running I/O for 2 
seconds... 00:26:28.237 00:26:28.237 Latency(us) 00:26:28.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.237 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:28.237 nvme0n1 : 2.00 27000.67 105.47 0.00 0.00 4732.21 4473.54 12081.42 00:26:28.237 =================================================================================================================== 00:26:28.237 Total : 27000.67 105.47 0.00 0.00 4732.21 4473.54 12081.42 00:26:28.237 0 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:28.237 | select(.opcode=="crc32c") 00:26:28.237 | "\(.module_name) \(.executed)"' 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1150156 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1150156 ']' 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # 
kill -0 1150156 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1150156 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1150156' 00:26:28.237 killing process with pid 1150156 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1150156 00:26:28.237 Received shutdown signal, test time was about 2.000000 seconds 00:26:28.237 00:26:28.237 Latency(us) 00:26:28.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.237 =================================================================================================================== 00:26:28.237 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:28.237 23:52:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1150156 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:28.237 23:52:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1150854 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1150854 /var/tmp/bperf.sock 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@823 -- # '[' -z 1150854 ']' 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:28.237 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:28.237 [2024-07-15 23:52:17.070276] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:26:28.238 [2024-07-15 23:52:17.070322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150854 ] 00:26:28.238 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:28.238 Zero copy mechanism will not be used. 00:26:28.238 [2024-07-15 23:52:17.125175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.238 [2024-07-15 23:52:17.204686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.176 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:29.176 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # return 0 00:26:29.176 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:29.176 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:29.176 23:52:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:29.176 23:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.176 23:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.744 nvme0n1 00:26:29.744 23:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:29.744 23:52:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.744 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:29.744 Zero copy mechanism will not be used. 00:26:29.744 Running I/O for 2 seconds... 00:26:31.644 00:26:31.644 Latency(us) 00:26:31.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.644 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:31.644 nvme0n1 : 2.00 5480.26 685.03 0.00 0.00 2914.78 2094.30 10371.78 00:26:31.644 =================================================================================================================== 00:26:31.644 Total : 5480.26 685.03 0.00 0.00 2914.78 2094.30 10371.78 00:26:31.644 0 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:31.903 | select(.opcode=="crc32c") 00:26:31.903 | "\(.module_name) \(.executed)"' 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:31.903 23:52:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1150854 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1150854 ']' 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1150854 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1150854 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1150854' 00:26:31.903 killing process with pid 1150854 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1150854 00:26:31.903 Received shutdown signal, test time was about 2.000000 seconds 00:26:31.903 00:26:31.903 Latency(us) 00:26:31.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.903 =================================================================================================================== 00:26:31.903 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:31.903 23:52:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1150854 00:26:32.165 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1148730 00:26:32.165 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@942 -- # '[' -z 1148730 ']' 00:26:32.166 23:52:21 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # kill -0 1148730 00:26:32.166 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # uname 00:26:32.166 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:32.166 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1148730 00:26:32.166 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:26:32.166 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:26:32.166 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1148730' 00:26:32.166 killing process with pid 1148730 00:26:32.166 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # kill 1148730 00:26:32.166 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # wait 1148730 00:26:32.423 00:26:32.423 real 0m17.033s 00:26:32.423 user 0m32.841s 00:26:32.423 sys 0m4.305s 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1118 -- # xtrace_disable 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:32.423 ************************************ 00:26:32.423 END TEST nvmf_digest_clean 00:26:32.423 ************************************ 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1136 -- # return 0 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # xtrace_disable 00:26:32.423 23:52:21 
nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:32.423 ************************************ 00:26:32.423 START TEST nvmf_digest_error 00:26:32.423 ************************************ 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1117 -- # run_digest_error 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1151576 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1151576 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1151576 ']' 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:32.423 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:32.424 23:52:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.424 [2024-07-15 23:52:21.378860] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:26:32.424 [2024-07-15 23:52:21.378899] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.681 [2024-07-15 23:52:21.434404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.681 [2024-07-15 23:52:21.513119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.681 [2024-07-15 23:52:21.513151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.681 [2024-07-15 23:52:21.513158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.681 [2024-07-15 23:52:21.513164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.681 [2024-07-15 23:52:21.513170] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:32.681 [2024-07-15 23:52:21.513186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.245 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:33.245 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:26:33.245 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:33.245 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:33.245 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.245 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.245 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:33.245 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:33.245 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.502 [2024-07-15 23:52:22.219251] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.502 null0 00:26:33.502 [2024-07-15 23:52:22.309536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.502 
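The `accel_rpc.c: Operation crc32c will be assigned to module error` notice above is the key setup step of the error-injection test: the target is started with `--wait-for-rpc`, the crc32c opcode is routed through the software `error` accel module, and only then is initialization finished and the TCP listener brought up. A hedged sketch of that RPC sequence (the `SPDK_RPC` path and the function name are illustrative assumptions; the harness wraps these calls in its `rpc_cmd` and `nvmfappstart` helpers, and the real run also creates the null0 bdev and subsystem):

```shell
#!/usr/bin/env bash
# Sketch of the digest-error target setup recorded in this log.
SPDK_RPC="${SPDK_RPC:-scripts/rpc.py}"   # assumed path to SPDK's rpc.py

setup_digest_error_target() {
    # Route crc32c through the error-injection accel module (must happen
    # before framework_start_init, i.e. while still in --wait-for-rpc mode).
    "$SPDK_RPC" accel_assign_opc -o crc32c -m error
    "$SPDK_RPC" framework_start_init
    # Bring up the transport and listener: "*** TCP Transport Init ***" /
    # "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" above.
    "$SPDK_RPC" nvmf_create_transport -t tcp
    "$SPDK_RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
}
```

Ordering matters: once `framework_start_init` runs, opcode-to-module assignments are locked in, which is why the harness starts every app with `--wait-for-rpc`.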
[2024-07-15 23:52:22.333707] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1151814 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1151814 /var/tmp/bperf.sock 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1151814 ']' 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:33.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:33.502 23:52:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:33.502 [2024-07-15 23:52:22.385838] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:26:33.502 [2024-07-15 23:52:22.385878] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1151814 ] 00:26:33.502 [2024-07-15 23:52:22.440446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.760 [2024-07-15 23:52:22.521031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.326 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:34.326 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:26:34.326 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.326 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.583 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:34.583 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:34.583 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.583 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:34.583 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.583 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.839 nvme0n1 00:26:34.839 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:34.839 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:34.839 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.839 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:34.839 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:34.839 23:52:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:35.097 Running I/O for 2 seconds... 
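The records that follow show the injection taking effect: each read completes its crc32c through the corrupting `error` module, `nvme_tcp` reports `data digest error`, and the command is failed back as `COMMAND TRANSIENT TRANSPORT ERROR (00/22)`, which bdevperf retries indefinitely because of `--bdev-retry-count -1`. When triaging such a log offline, a pair of greps like the following (illustrative helpers, not part of the test suite) confirms that every digest failure surfaced as the expected transient status and nothing escalated to a harder error:

```shell
#!/usr/bin/env bash
# Illustrative log-triage helpers for the records below.
count_digest_errors() {
    # Injected crc32c corruptions detected on the initiator side.
    grep -c 'data digest error' "$1"
}
count_transient_errors() {
    # Completions with status COMMAND TRANSIENT TRANSPORT ERROR (sct/sc 00/22).
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$1"
}
```

For a clean run of this test the two counts should match: one transient-transport completion per injected digest error, and zero I/O failures in the final Latency table.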
00:26:35.097 [2024-07-15 23:52:23.898610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.097 [2024-07-15 23:52:23.898641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.097 [2024-07-15 23:52:23.898652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.097 [2024-07-15 23:52:23.909062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.097 [2024-07-15 23:52:23.909086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.097 [2024-07-15 23:52:23.909095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.097 [2024-07-15 23:52:23.917283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:23.917306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:23.917315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:23.928082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:23.928104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:23.928113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:23.936032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:23.936054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:23.936063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:23.946812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:23.946833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:23.946841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:23.955836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:23.955857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:23.955865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:23.965632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:23.965654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:23.965662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:23.975278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:23.975298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:23.975305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:23.983604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:23.983625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:23.983633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:23.993702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:23.993722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:23.993731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:24.002330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:24.002350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:35.098 [2024-07-15 23:52:24.002358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:24.011771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:24.011791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:24.011799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:24.021676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:24.021696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:24.021704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:24.031826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:24.031851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:24.031860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:24.039688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:24.039709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 
nsid:1 lba:16937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:24.039717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:24.050231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:24.050253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:24.050261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:24.060473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:24.060493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:24.060501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.098 [2024-07-15 23:52:24.068790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.098 [2024-07-15 23:52:24.068811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.098 [2024-07-15 23:52:24.068820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.079660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.079680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.079688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.088424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.088445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.088453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.098127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.098150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.098158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.106608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.106630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.106638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.116192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.116213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.116221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.125654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.125674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.125682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.134759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.134780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.134789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.143948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.143968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.143976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.154161] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.154181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.154189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.162691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.162711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.162719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.172917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.172937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.172945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.183175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.183194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.183203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.191937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.191957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.191968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.201665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.201685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.201693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.210764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.210784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.210792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.220497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.220517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.220525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.229519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.229540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.229548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.239066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.239087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.239094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.248260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.248281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.248289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.257977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.257997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 
23:52:24.258005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.266214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.266241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.266249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.276906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.276930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.276938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.286206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.286230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.286239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.295059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.295080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20984 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.295088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.304143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.304163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.304171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.314384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.314404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.314412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.357 [2024-07-15 23:52:24.322873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.357 [2024-07-15 23:52:24.322894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.357 [2024-07-15 23:52:24.322902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.616 [2024-07-15 23:52:24.332737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.616 [2024-07-15 23:52:24.332757] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.616 [2024-07-15 23:52:24.332766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.616 [2024-07-15 23:52:24.343333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.616 [2024-07-15 23:52:24.343355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.616 [2024-07-15 23:52:24.343363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.616 [2024-07-15 23:52:24.351960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.616 [2024-07-15 23:52:24.351981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.616 [2024-07-15 23:52:24.351989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.616 [2024-07-15 23:52:24.362516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.616 [2024-07-15 23:52:24.362537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.616 [2024-07-15 23:52:24.362545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.616 [2024-07-15 23:52:24.372296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14e0f20) 00:26:35.616 [2024-07-15 23:52:24.372317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.616 [2024-07-15 23:52:24.372325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.616 [2024-07-15 23:52:24.381533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.616 [2024-07-15 23:52:24.381554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.616 [2024-07-15 23:52:24.381562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.616 [2024-07-15 23:52:24.389983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.616 [2024-07-15 23:52:24.390003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.616 [2024-07-15 23:52:24.390011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.616 [2024-07-15 23:52:24.399838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.616 [2024-07-15 23:52:24.399858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.616 [2024-07-15 23:52:24.399866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.616 [2024-07-15 23:52:24.409147] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.409167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.409175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.418932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.418953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.418961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.427480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.427500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.427508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.437819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.437839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.437850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.447079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.447100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.447108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.456025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.456044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.456053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.465736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.465757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.465765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.475119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.475140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.475148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.485134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.485154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.485163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.494591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.494610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.494618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.502498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.502517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.502524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.512659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.512679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 
23:52:24.512687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.522472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.522492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.522500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.531236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.531256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.531265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.541373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.541393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.541401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.550908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.550928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21415 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.550936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.559616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.559635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.559644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.568492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.568513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.568521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.578372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.578392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.578400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.617 [2024-07-15 23:52:24.588261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.617 [2024-07-15 23:52:24.588282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.617 [2024-07-15 23:52:24.588291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.598588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.598608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.598619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.606413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.606433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.606441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.616495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.616514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.616523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.625709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.625730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.625737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.635890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.635910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.635919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.644550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.644570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.644578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.654576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.654597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.654605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.664775] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.664795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.664803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.673065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.673086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.673094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.684588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.684611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.684619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.876 [2024-07-15 23:52:24.692603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:35.876 [2024-07-15 23:52:24.692624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.876 [2024-07-15 23:52:24.692632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:26:35.876 [2024-07-15 23:52:24.701630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.876 [2024-07-15 23:52:24.701650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.876 [2024-07-15 23:52:24.701658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.876 [2024-07-15 23:52:24.712098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.876 [2024-07-15 23:52:24.712118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.876 [2024-07-15 23:52:24.712126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.876 [2024-07-15 23:52:24.721140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.876 [2024-07-15 23:52:24.721161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.876 [2024-07-15 23:52:24.721169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.876 [2024-07-15 23:52:24.730765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.876 [2024-07-15 23:52:24.730786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.876 [2024-07-15 23:52:24.730794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.876 [2024-07-15 23:52:24.739641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.876 [2024-07-15 23:52:24.739661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:10669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.876 [2024-07-15 23:52:24.739669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.876 [2024-07-15 23:52:24.749778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.876 [2024-07-15 23:52:24.749798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.876 [2024-07-15 23:52:24.749806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.876 [2024-07-15 23:52:24.758760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.876 [2024-07-15 23:52:24.758780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.876 [2024-07-15 23:52:24.758788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.876 [2024-07-15 23:52:24.768366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.876 [2024-07-15 23:52:24.768387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.876 [2024-07-15 23:52:24.768395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.876 [2024-07-15 23:52:24.777599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.877 [2024-07-15 23:52:24.777619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.877 [2024-07-15 23:52:24.777627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.877 [2024-07-15 23:52:24.787165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.877 [2024-07-15 23:52:24.787186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.877 [2024-07-15 23:52:24.787195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.877 [2024-07-15 23:52:24.796725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.877 [2024-07-15 23:52:24.796747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.877 [2024-07-15 23:52:24.796755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.877 [2024-07-15 23:52:24.806320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.877 [2024-07-15 23:52:24.806343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.877 [2024-07-15 23:52:24.806352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.877 [2024-07-15 23:52:24.814272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.877 [2024-07-15 23:52:24.814301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.877 [2024-07-15 23:52:24.814310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.877 [2024-07-15 23:52:24.824416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.877 [2024-07-15 23:52:24.824441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.877 [2024-07-15 23:52:24.824451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.877 [2024-07-15 23:52:24.834051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.877 [2024-07-15 23:52:24.834072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.877 [2024-07-15 23:52:24.834080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:35.877 [2024-07-15 23:52:24.842934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:35.877 [2024-07-15 23:52:24.842955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:35.877 [2024-07-15 23:52:24.842967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.853451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.853472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.853481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.863285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.863306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.863314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.872197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.872218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.872233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.881098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.881118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.881126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.890848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.890869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.890877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.900362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.900383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.900391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.909852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.909873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.909882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.918330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.918351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.918359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.928268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.928290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.928298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.938093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.938114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.938122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.947805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.947826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.947834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.957279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.957300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.957308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.965095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.965116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.965124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.976037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.976058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.976066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.984733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.984753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.984762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:24.995248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:24.995269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:24.995276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.004070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.004090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.004101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.014156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.014177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.014185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.022598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.022619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.022628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.032328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.032349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.032358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.042466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.042498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.042506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.050791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.050812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.050820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.061519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.061540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.061549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.070016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.070036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.070045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.080354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.080375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.080383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.088944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.088966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.088975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.136 [2024-07-15 23:52:25.098690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.136 [2024-07-15 23:52:25.098710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.136 [2024-07-15 23:52:25.098718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.108675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.108696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.108704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.117374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.117397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.117405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.127309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.127331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.127340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.136949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.136970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.136978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.145150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.145171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.145178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.155327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.155348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.155356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.165586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.165606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.165614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.175316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.175337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.175345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.184277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.184297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.184305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.193767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.193787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.193795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.203740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.203761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.203769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.211605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.211625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.211633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.221970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.221991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.221999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.231578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.231598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.231606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.240038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.240058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.240066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.249807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.249827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.249838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.260070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.260090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.260098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.268367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.268387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.268395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.279407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.279427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.279435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.287214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.287239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.287247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.296826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.296846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.296854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.306793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.306813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.306821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.315656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.315677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.315685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.326003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.326024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.326032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.334239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.334259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.334267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.396 [2024-07-15 23:52:25.344342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.396 [2024-07-15 23:52:25.344363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.396 [2024-07-15 23:52:25.344372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.397 [2024-07-15 23:52:25.354628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.397 [2024-07-15 23:52:25.354649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.397 [2024-07-15 23:52:25.354657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.397 [2024-07-15 23:52:25.362703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.397 [2024-07-15 23:52:25.362724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.397 [2024-07-15 23:52:25.362733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.655 [2024-07-15 23:52:25.372670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.655 [2024-07-15 23:52:25.372691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.655 [2024-07-15 23:52:25.372700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.655 [2024-07-15 23:52:25.382531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.655 [2024-07-15 23:52:25.382551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.655 [2024-07-15 23:52:25.382559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.656 [2024-07-15 23:52:25.391090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.656 [2024-07-15 23:52:25.391110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:21557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.656 [2024-07-15 23:52:25.391118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.656 [2024-07-15 23:52:25.401453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.656 [2024-07-15 23:52:25.401473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.656 [2024-07-15 23:52:25.401481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.656 [2024-07-15 23:52:25.410201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.656 [2024-07-15 23:52:25.410222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.656 [2024-07-15 23:52:25.410238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.656 [2024-07-15 23:52:25.418926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.656 [2024-07-15 23:52:25.418946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.656 [2024-07-15 23:52:25.418954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.656 [2024-07-15 23:52:25.428572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.656 [2024-07-15 23:52:25.428591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.656 [2024-07-15 23:52:25.428599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.656 [2024-07-15 23:52:25.438971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.656 [2024-07-15 23:52:25.438991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.656 [2024-07-15 23:52:25.438999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.656 [2024-07-15 23:52:25.447400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.656 [2024-07-15 23:52:25.447420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.656 [2024-07-15 23:52:25.447428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:36.656 [2024-07-15 23:52:25.456764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20)
00:26:36.656 [2024-07-15 23:52:25.456783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.656 [2024-07-15 23:52:25.456792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0
sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.466825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.466845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.466853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.476353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.476373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.476381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.484867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.484887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.484895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.494607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.494631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.494639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.503980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.504000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.504008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.513189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.513209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.513216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.521884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.521904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.521911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.531004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.531024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 
23:52:25.531032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.541430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.541451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.541459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.550703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.550724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:6913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.550731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.559533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.559554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.559561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.568967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.568988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13997 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.568996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.579102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.579122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.579130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.587244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.587263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.587271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.597072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.597092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.597100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.607029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.607048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.607057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.615216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.615240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.615249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.656 [2024-07-15 23:52:25.625585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.656 [2024-07-15 23:52:25.625606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.656 [2024-07-15 23:52:25.625614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.634627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.634647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.634654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.644721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.644741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.644749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.655373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.655392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.655403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.664076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.664096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.664104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.675297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.675317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.675325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.683271] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.683291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.683298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.693067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.693088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.693096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.702640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.702660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.702668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.711986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.712007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.712015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.720821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.720840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.720848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.730374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.730395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.730403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.739866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.739890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.739898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.748781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.748801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.748809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.759654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.759675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.759683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.767928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.767948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.767956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.777904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.777924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.777932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.787199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.787219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 
23:52:25.787233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.796002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.796022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.796030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.806285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.806306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.806314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.815202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.815222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.815235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.825053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.825073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22200 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.825081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.834562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.916 [2024-07-15 23:52:25.834582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.916 [2024-07-15 23:52:25.834590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.916 [2024-07-15 23:52:25.843314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.917 [2024-07-15 23:52:25.843335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.917 [2024-07-15 23:52:25.843343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.917 [2024-07-15 23:52:25.853356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.917 [2024-07-15 23:52:25.853376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.917 [2024-07-15 23:52:25.853384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.917 [2024-07-15 23:52:25.862452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.917 [2024-07-15 23:52:25.862473] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:4519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.917 [2024-07-15 23:52:25.862481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.917 [2024-07-15 23:52:25.871388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.917 [2024-07-15 23:52:25.871409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.917 [2024-07-15 23:52:25.871418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.917 [2024-07-15 23:52:25.880737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:36.917 [2024-07-15 23:52:25.880757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.917 [2024-07-15 23:52:25.880766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.175 [2024-07-15 23:52:25.889408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14e0f20) 00:26:37.175 [2024-07-15 23:52:25.889429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.175 [2024-07-15 23:52:25.889438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.175 00:26:37.175 Latency(us) 00:26:37.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.175 Job: nvme0n1 
(Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:37.175 nvme0n1 : 2.00 26991.42 105.44 0.00 0.00 4735.96 2578.70 12366.36 00:26:37.175 =================================================================================================================== 00:26:37.175 Total : 26991.42 105.44 0.00 0.00 4735.96 2578.70 12366.36 00:26:37.175 0 00:26:37.175 23:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:37.175 23:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:37.175 23:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:37.175 | .driver_specific 00:26:37.175 | .nvme_error 00:26:37.175 | .status_code 00:26:37.175 | .command_transient_transport_error' 00:26:37.175 23:52:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 212 > 0 )) 00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1151814 00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1151814 ']' 00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1151814 00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname 00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1151814 00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:26:37.175 23:52:26 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1151814'
00:26:37.175 killing process with pid 1151814
00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1151814
00:26:37.175 Received shutdown signal, test time was about 2.000000 seconds
00:26:37.175
00:26:37.175 Latency(us)
00:26:37.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:37.175 ===================================================================================================================
00:26:37.175 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:37.175 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1151814
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1152344
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1152344 /var/tmp/bperf.sock
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1152344 ']'
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:37.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable
00:26:37.434 23:52:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:37.434 [2024-07-15 23:52:26.356312] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:26:37.434 [2024-07-15 23:52:26.356364] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152344 ]
00:26:37.434 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:37.434 Zero copy mechanism will not be used.
00:26:37.711 [2024-07-15 23:52:26.410442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:37.712 [2024-07-15 23:52:26.490033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:38.276 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:26:38.276 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0
00:26:38.276 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:38.276 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:38.533 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:38.533 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable
00:26:38.533 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:38.533 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:26:38.533 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:38.533 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:38.792 nvme0n1
00:26:38.792 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:38.792 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable
00:26:38.792 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:38.792 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:26:38.792 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:38.792 23:52:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:38.792 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:38.792 Zero copy mechanism will not be used.
00:26:38.792 Running I/O for 2 seconds...
00:26:38.792 [2024-07-15 23:52:27.714445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:38.792 [2024-07-15 23:52:27.714479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.792 [2024-07-15 23:52:27.714490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:38.792 [2024-07-15 23:52:27.724610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:38.792 [2024-07-15 23:52:27.724640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.792 [2024-07-15 23:52:27.724648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:38.792 [2024-07-15 23:52:27.733522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:38.792
[2024-07-15 23:52:27.733543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.792 [2024-07-15 23:52:27.733552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:38.792 [2024-07-15 23:52:27.741782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:38.792 [2024-07-15 23:52:27.741805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.792 [2024-07-15 23:52:27.741813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.792 [2024-07-15 23:52:27.750588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:38.792 [2024-07-15 23:52:27.750610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.792 [2024-07-15 23:52:27.750619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:38.792 [2024-07-15 23:52:27.759756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:38.792 [2024-07-15 23:52:27.759777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.792 [2024-07-15 23:52:27.759786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.768588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.768614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.768622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.776359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.776380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.776389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.783752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.783773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.783781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.790516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.790537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.790544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.797142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.797163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.797171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.803550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.803572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.803580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.810939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.810962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.810970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.817212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.817241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.817250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.823111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.823133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.823141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.829070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.829091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.829099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.835097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.835118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.835127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.840992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.841015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.841023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.846837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.846859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.846870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.852309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.852332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.852341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.858039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.858060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.858068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.863758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.863780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.863788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.869675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.869698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.051 [2024-07-15 23:52:27.869706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.051 [2024-07-15 23:52:27.875562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.051 [2024-07-15 23:52:27.875584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.875593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.881433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.881454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.881462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.887272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.887293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.887300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.893249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.893270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.893278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.899122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.899147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.899156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.904583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.904604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.904612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.910383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.910405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.910413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.916210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.916239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.916248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.921943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.921964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.921973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.927736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.927758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.927766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.933602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.933624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.933632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.939414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.939435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.939443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.945299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.945320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.945332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.951129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.951151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.951159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.956920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.956941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.956949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.962865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.962886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.962894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.968713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.968734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.968743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.974687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.974708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.974717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.980533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.980554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.980563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.986372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.052 [2024-07-15 23:52:27.986394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.052 [2024-07-15 23:52:27.986402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.052 [2024-07-15 23:52:27.992136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.053 [2024-07-15 23:52:27.992158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.053 [2024-07-15 23:52:27.992167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.053 [2024-07-15 23:52:27.997959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.053 [2024-07-15 23:52:27.997984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.053 [2024-07-15 23:52:27.997992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.053 [2024-07-15 23:52:28.003885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.053 [2024-07-15 23:52:28.003906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.053 [2024-07-15 23:52:28.003915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.053 [2024-07-15 23:52:28.009878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.053 [2024-07-15 23:52:28.009899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.053 [2024-07-15 23:52:28.009907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.053 [2024-07-15 23:52:28.015746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.053 [2024-07-15 23:52:28.015767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.053 [2024-07-15 23:52:28.015775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.053 [2024-07-15 23:52:28.021728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.053 [2024-07-15 23:52:28.021750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.053 [2024-07-15 23:52:28.021759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.027737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.027759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.027768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.033694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.033716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.033724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.039563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.039584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.039593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.045365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.045388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.045397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.051302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.051324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.051333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.057198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.057221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.057235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.063011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.063033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.063041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.068877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.068898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.068906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.074737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.074759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.074766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.080540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.080562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.080570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.086321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.086342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.086350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.092216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.092243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.092251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.098081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.098103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.098114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.103834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.103855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.103863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.109647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.109669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.109677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.115570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.115591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.115599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.121359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.121380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.121388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.127077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.127099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.127107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.132918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:39.312 [2024-07-15 23:52:28.132940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:39.312 [2024-07-15 23:52:28.132948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:39.312 [2024-07-15 23:52:28.138812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x1f590b0) 00:26:39.312 [2024-07-15 23:52:28.138833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.312 [2024-07-15 23:52:28.138841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.312 [2024-07-15 23:52:28.144620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.312 [2024-07-15 23:52:28.144642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.312 [2024-07-15 23:52:28.144650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.312 [2024-07-15 23:52:28.150394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.312 [2024-07-15 23:52:28.150419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.312 [2024-07-15 23:52:28.150428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.312 [2024-07-15 23:52:28.156262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.312 [2024-07-15 23:52:28.156283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.312 [2024-07-15 23:52:28.156291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.312 [2024-07-15 23:52:28.162031] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.312 [2024-07-15 23:52:28.162054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.312 [2024-07-15 23:52:28.162062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.167826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.167847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.167855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.173704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.173725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.173733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.179693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.179718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.179728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.185470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.185493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.185502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.191257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.191279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.191288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.197122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.197145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.197154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.202953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.202976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.202984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.208746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.208768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.208776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.214530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.214551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.214559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.220238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.220259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.220267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.226018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.226039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.226047] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.231878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.231899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.231907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.237679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.237701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.237709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.243445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.243466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.243474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.249218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.249245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.249256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.255006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.255028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.255036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.260892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.260913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.260921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.266622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.266644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.266652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.272426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.272447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.272456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.313 [2024-07-15 23:52:28.278213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.313 [2024-07-15 23:52:28.278240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.313 [2024-07-15 23:52:28.278249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.285124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.285147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.285155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.291029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.291051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.291059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.296798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.296819] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.296827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.302644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.302665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.302673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.308549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.308568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.308576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.314400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.314422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.314430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.320221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.320248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.320257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.326143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.326165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.326174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.330094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.330115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.330124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.334742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.334764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.334772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.572 [2024-07-15 23:52:28.340585] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.572 [2024-07-15 23:52:28.340606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.572 [2024-07-15 23:52:28.340614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.346744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.346766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.346778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.353853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.353880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.353888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.360502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.360524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.360532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.367442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.367465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.367473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.373732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.373753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.373762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.380056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.380078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.380086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.386332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.386353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.386361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.392410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.392432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.392440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.398419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.398440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.398448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.404390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.404414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.404422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.410312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.410333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.410341] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.416414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.416436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.416444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.422429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.422456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.422465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.427980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.428001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.428009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.433824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.433845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.433854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.439550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.439571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.439579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.445359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.445380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.445389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.451133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.451154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.451162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.456948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.456970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.456978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.462812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.462834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.462842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.468199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.468220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.468234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.472998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.473020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.473028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.478772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.478794] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.573 [2024-07-15 23:52:28.478802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.573 [2024-07-15 23:52:28.484544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.573 [2024-07-15 23:52:28.484565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.484574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.574 [2024-07-15 23:52:28.490366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.574 [2024-07-15 23:52:28.490389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.490398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.574 [2024-07-15 23:52:28.496183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.574 [2024-07-15 23:52:28.496204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.496212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.574 [2024-07-15 23:52:28.501993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f590b0) 00:26:39.574 [2024-07-15 23:52:28.502013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.502025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.574 [2024-07-15 23:52:28.507921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.574 [2024-07-15 23:52:28.507942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.507951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.574 [2024-07-15 23:52:28.513866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.574 [2024-07-15 23:52:28.513888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.513896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.574 [2024-07-15 23:52:28.519634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.574 [2024-07-15 23:52:28.519656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.519664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.574 [2024-07-15 23:52:28.525391] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.574 [2024-07-15 23:52:28.525412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.525420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.574 [2024-07-15 23:52:28.531298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.574 [2024-07-15 23:52:28.531320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.531328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.574 [2024-07-15 23:52:28.537128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.574 [2024-07-15 23:52:28.537150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.537158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.574 [2024-07-15 23:52:28.542924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.574 [2024-07-15 23:52:28.542946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.574 [2024-07-15 23:52:28.542954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.548860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.548882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.548890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.554777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.554802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.554810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.560552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.560573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.560582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.566267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.566288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.566296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.571960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.571982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.571990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.577767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.577787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.577795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.583531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.583552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.583561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.589396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.589417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.589425] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.595259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.595280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.595288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.601153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.601174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.601183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.605084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.605105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.605113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.609641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.609663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.609671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.615729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.833 [2024-07-15 23:52:28.615750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.833 [2024-07-15 23:52:28.615758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.833 [2024-07-15 23:52:28.621651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.621672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.621681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.629016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.629039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.629047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.638275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.638298] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.638306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.648820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.648843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.648851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.658993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.659015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.659023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.668118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.668139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.668150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.677471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 
23:52:28.677493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.677501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.686139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.686161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.686169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.695091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.695113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.695122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.706811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.706833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.706841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.717026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.717048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.717056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.725860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.725882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.725890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.733491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.733513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.733522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.741102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.741124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.741132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.749768] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.749791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.749799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.759563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.759585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.759593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.768502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.768524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.768533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.776729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.776751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.776759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.783546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.783568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.783576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.790248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.790269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.790278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.797065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.797086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.797094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.834 [2024-07-15 23:52:28.803877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:39.834 [2024-07-15 23:52:28.803899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.834 [2024-07-15 23:52:28.803908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.092 [2024-07-15 23:52:28.809847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.809869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 23:52:28.809881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.815911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.815933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 23:52:28.815941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.822069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.822090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 23:52:28.822098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.828759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.828780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 
23:52:28.828788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.835154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.835176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 23:52:28.835185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.841189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.841213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 23:52:28.841221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.847356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.847377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 23:52:28.847385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.853583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.853606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 23:52:28.853614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.861524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.861546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 23:52:28.861554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.870798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.870825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 23:52:28.870833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.881876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.881898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.093 [2024-07-15 23:52:28.881906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.093 [2024-07-15 23:52:28.891798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.093 [2024-07-15 23:52:28.891819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.891827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.901485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.901514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.901522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.910244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.910267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.910275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.919453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.919475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.919483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.926497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.926520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.926528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.933610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.933632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.933640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.940793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.940814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.940823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.947581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.947604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.947612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.958587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.958609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.958617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.968418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.968440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.968448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.977706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.977727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.977735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.985634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.985656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.985663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:28.993589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:28.993619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:28.993627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:29.002323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:29.002346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:29.002355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:29.010883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:29.010906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:29.010915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:29.020856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:29.020878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:29.020890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:29.032097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:29.032120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.093 [2024-07-15 23:52:29.032128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.093 [2024-07-15 23:52:29.042406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.093 [2024-07-15 23:52:29.042427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.094 [2024-07-15 23:52:29.042435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.094 [2024-07-15 23:52:29.052880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.094 [2024-07-15 23:52:29.052903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.094 [2024-07-15 23:52:29.052911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.094 [2024-07-15 23:52:29.062114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.094 [2024-07-15 23:52:29.062138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.094 [2024-07-15 23:52:29.062146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.071595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.071617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.071625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.080454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.080479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.080487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.089648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.089674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.089683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.099245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.099269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.099278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.108578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.108604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.108612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.117317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.117342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.117351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.126260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.126284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.126296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.135206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.135235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.135244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.143798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.143820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.143829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.152124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.152147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.152155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.160889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.353 [2024-07-15 23:52:29.160911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.353 [2024-07-15 23:52:29.160919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.353 [2024-07-15 23:52:29.169832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.169855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.169864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.179130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.179153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.179166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.187919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.187943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.187951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.196462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.196486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.196496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.205876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.205900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.205908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.214930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.214953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.214962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.224927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.224950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.224959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.234086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.234109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.234118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.243318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.243340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.243349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.251069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.251092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.251101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.259190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.259218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.259232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.267039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.267061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.267069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.274389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.274411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.274419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.281702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.281724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.281733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.289186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.289209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.289217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.296828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.296850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.296858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.304421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.304445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.304453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.311370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.311391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.311400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.318106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.318128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.318136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.354 [2024-07-15 23:52:29.324698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.354 [2024-07-15 23:52:29.324720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.354 [2024-07-15 23:52:29.324729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.331115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.331138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.331146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.337375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.337397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.337406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.343536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.343558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.343566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.349634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.349655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.349662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.355823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.355845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.355853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.361814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.361835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.361843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.367685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.367707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.367715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.373542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.373564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.373575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.378551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.378573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.378581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.384642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.384663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.384671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.390604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.390625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.390634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.396576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.396597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.396604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.402487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.402508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.402516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.408464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.408486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.408494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.414348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.414369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.414377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.420231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.420252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.420260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.426109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.426134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.426142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.431911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.431933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.431941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.437944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.437965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.620 [2024-07-15 23:52:29.437973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.620 [2024-07-15 23:52:29.443840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.620 [2024-07-15 23:52:29.443861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.443871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.449655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.449676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.449684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.455523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.455545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.455553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.461406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.461427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.461435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.467149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.467170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.467178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.472898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.472920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.472928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.478638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.478659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.478667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.484445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.484466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.484474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.490383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.490405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.490414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.496619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.496641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.496650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.502848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.502870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15 23:52:29.502878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:40.621 [2024-07-15 23:52:29.509324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0)
00:26:40.621 [2024-07-15 23:52:29.509344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:40.621 [2024-07-15
23:52:29.509352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.621 [2024-07-15 23:52:29.515571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.621 [2024-07-15 23:52:29.515593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.621 [2024-07-15 23:52:29.515600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.621 [2024-07-15 23:52:29.526318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.621 [2024-07-15 23:52:29.526339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.621 [2024-07-15 23:52:29.526347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.621 [2024-07-15 23:52:29.536388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.621 [2024-07-15 23:52:29.536409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.621 [2024-07-15 23:52:29.536421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.621 [2024-07-15 23:52:29.545769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.621 [2024-07-15 23:52:29.545791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.621 [2024-07-15 23:52:29.545799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.621 [2024-07-15 23:52:29.555275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.621 [2024-07-15 23:52:29.555297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.621 [2024-07-15 23:52:29.555306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.621 [2024-07-15 23:52:29.566273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.621 [2024-07-15 23:52:29.566295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.621 [2024-07-15 23:52:29.566303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.621 [2024-07-15 23:52:29.576538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.621 [2024-07-15 23:52:29.576559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.621 [2024-07-15 23:52:29.576567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.621 [2024-07-15 23:52:29.586828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.621 [2024-07-15 23:52:29.586851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.621 [2024-07-15 23:52:29.586860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.596278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.596299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.596308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.606007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.606030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.606039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.615241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.615263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.615271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.623366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.623387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.623396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.630947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.630969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.630978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.638291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.638314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.638323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.644984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.645006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.645014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.651577] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.651598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.651607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.658770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.658792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.658801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.665983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.666006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.666015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.670213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.670240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.670249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.676894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.676916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.676928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.684547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.684569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.684577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.693442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.693464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.693472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:40.935 [2024-07-15 23:52:29.704576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f590b0) 00:26:40.935 [2024-07-15 23:52:29.704598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.935 [2024-07-15 23:52:29.704607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:40.935 00:26:40.935 Latency(us) 00:26:40.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.935 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:40.935 nvme0n1 : 2.00 4481.22 560.15 0.00 0.00 3567.70 648.24 11397.57 00:26:40.935 =================================================================================================================== 00:26:40.935 Total : 4481.22 560.15 0.00 0.00 3567.70 648.24 11397.57 00:26:40.935 0 00:26:40.935 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:40.935 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:40.935 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:40.935 | .driver_specific 00:26:40.935 | .nvme_error 00:26:40.935 | .status_code 00:26:40.935 | .command_transient_transport_error' 00:26:40.935 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:40.935 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 289 > 0 )) 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1152344 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1152344 ']' 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1152344 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1152344 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1152344' 00:26:41.194 killing process with pid 1152344 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1152344 00:26:41.194 Received shutdown signal, test time was about 2.000000 seconds 00:26:41.194 00:26:41.194 Latency(us) 00:26:41.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.194 =================================================================================================================== 00:26:41.194 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.194 23:52:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1152344 00:26:41.194 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:41.194 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:41.194 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:41.194 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:41.194 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:41.194 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1152997 00:26:41.194 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1152997 /var/tmp/bperf.sock 00:26:41.195 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:41.195 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1152997 ']' 00:26:41.195 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.195 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:41.195 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:41.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:41.195 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:41.195 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:41.454 [2024-07-15 23:52:30.190163] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:26:41.454 [2024-07-15 23:52:30.190212] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152997 ] 00:26:41.454 [2024-07-15 23:52:30.244375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.454 [2024-07-15 23:52:30.321557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.391 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:42.391 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0 00:26:42.391 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.391 23:52:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.391 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:42.391 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:42.391 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.391 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:42.391 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.391 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.649 nvme0n1 00:26:42.649 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:42.649 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:42.649 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.649 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:42.649 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:42.649 23:52:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:42.650 Running I/O for 2 seconds... 00:26:42.650 [2024-07-15 23:52:31.619791] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.650 [2024-07-15 23:52:31.620004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.650 [2024-07-15 23:52:31.620033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.909 [2024-07-15 23:52:31.629623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.909 [2024-07-15 23:52:31.629825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.909 [2024-07-15 23:52:31.629848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.909 
[2024-07-15 23:52:31.639248] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.909 [2024-07-15 23:52:31.639448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.909 [2024-07-15 23:52:31.639468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.909 [2024-07-15 23:52:31.648805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.909 [2024-07-15 23:52:31.649000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.909 [2024-07-15 23:52:31.649019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.909 [2024-07-15 23:52:31.658422] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.909 [2024-07-15 23:52:31.658620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.909 [2024-07-15 23:52:31.658640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.909 [2024-07-15 23:52:31.667957] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.909 [2024-07-15 23:52:31.668154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.909 [2024-07-15 23:52:31.668172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:007e p:0 m:0 dnr:0 00:26:42.909 [2024-07-15 23:52:31.677453] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.909 [2024-07-15 23:52:31.677648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.909 [2024-07-15 23:52:31.677670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.909 [2024-07-15 23:52:31.686945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.909 [2024-07-15 23:52:31.687140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.909 [2024-07-15 23:52:31.687159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.909 [2024-07-15 23:52:31.696473] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.909 [2024-07-15 23:52:31.696666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.909 [2024-07-15 23:52:31.696684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.705945] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.706138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.706156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.715505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.715697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.715715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.725103] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.725297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.725314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.734505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.734704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.734724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.744055] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.744248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.744267] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.753590] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.753790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.753809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.763061] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.763254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.763274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.772578] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.772773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.772790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.782096] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.782294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.782311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.791603] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.791800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.791820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.801077] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.801276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.801293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.810549] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.810745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.810762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.820037] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.820236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 
23:52:31.820253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.829496] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.829689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.829707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.838952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.839145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.839163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.848473] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.848674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.848691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.857950] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.858143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20688 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:42.910 [2024-07-15 23:52:31.858160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.867434] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.867627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.867644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.910 [2024-07-15 23:52:31.877040] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:42.910 [2024-07-15 23:52:31.877240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.910 [2024-07-15 23:52:31.877258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.170 [2024-07-15 23:52:31.886814] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.170 [2024-07-15 23:52:31.887015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.170 [2024-07-15 23:52:31.887033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.170 [2024-07-15 23:52:31.896580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.170 [2024-07-15 23:52:31.896781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9970 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.170 [2024-07-15 23:52:31.896799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.170 [2024-07-15 23:52:31.906217] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.170 [2024-07-15 23:52:31.906419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.170 [2024-07-15 23:52:31.906437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.170 [2024-07-15 23:52:31.915708] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.170 [2024-07-15 23:52:31.915900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.170 [2024-07-15 23:52:31.915918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.170 [2024-07-15 23:52:31.925305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.170 [2024-07-15 23:52:31.925501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.170 [2024-07-15 23:52:31.925521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.170 [2024-07-15 23:52:31.934773] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.170 [2024-07-15 23:52:31.934967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:2986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.170 [2024-07-15 23:52:31.934985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.170 [2024-07-15 23:52:31.944274] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.170 [2024-07-15 23:52:31.944469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:31.944487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:31.953845] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:31.954039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:31.954056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:31.963326] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:31.963521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:31.963543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:31.972826] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:31.973025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:31.973045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:31.982580] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:31.982775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:6124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:31.982792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:31.992050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:31.992252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:31.992270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.001535] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.001729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.001747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.010997] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 
23:52:32.011195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.011217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.020683] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.020879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.020897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.030150] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.030355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.030372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.039642] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.039837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.039854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.049105] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with 
pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.049301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.049318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.058658] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.058851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.058868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.068100] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.068294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.068310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.077595] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.077855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.077874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.087058] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.087261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.087278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.096542] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.096739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.096761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.106001] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.106194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.106212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.115471] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.115665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.115682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.124951] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.125143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.125160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.171 [2024-07-15 23:52:32.134456] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.171 [2024-07-15 23:52:32.134652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.171 [2024-07-15 23:52:32.134678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.144147] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.144353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.144373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.153853] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.154046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.154065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:26:43.431 [2024-07-15 23:52:32.163404] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.163612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.163630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.172888] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.173086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.173113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.182374] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.182574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.182592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.191876] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.192068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.192085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.201380] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.201574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.201592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.210859] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.211054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.211073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.220340] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.220535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.220552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.229846] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.230042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.230060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.239326] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.239521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.239539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.248795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.248989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.249007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.258377] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.258573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.258592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.267889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.268084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.268102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.277397] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.277597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.277616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.286884] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.287075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.287092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.296369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.296564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.431 [2024-07-15 23:52:32.296582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.431 [2024-07-15 23:52:32.305871] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.431 [2024-07-15 23:52:32.306067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.432 [2024-07-15 
23:52:32.306084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.432 [2024-07-15 23:52:32.315353] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.432 [2024-07-15 23:52:32.315548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.432 [2024-07-15 23:52:32.315567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.432 [2024-07-15 23:52:32.324827] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.432 [2024-07-15 23:52:32.325024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.432 [2024-07-15 23:52:32.325043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.432 [2024-07-15 23:52:32.334330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.432 [2024-07-15 23:52:32.334525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.432 [2024-07-15 23:52:32.334543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.432 [2024-07-15 23:52:32.343833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.432 [2024-07-15 23:52:32.344025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20390 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:43.432 [2024-07-15 23:52:32.344046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:43.432 [2024-07-15 23:52:32.353359] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:43.432 [2024-07-15 23:52:32.353554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.432 [2024-07-15 23:52:32.353572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... ~100 near-identical entries elided (23:52:32.344 through 23:52:33.095): each injected data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 is reported by tcp.c:2081:data_crc32_calc_done, and the affected WRITE (sqid:1, nsid:1, len:1, varying cid 0/1/6/7 and lba) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:007e p:0 m:0 dnr:0 ...]
00:26:44.213 [2024-07-15 23:52:33.095495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.213 [2024-07-15 23:52:33.095697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.213 [2024-07-15 23:52:33.095715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.213 [2024-07-15 23:52:33.104986] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.213 [2024-07-15 23:52:33.105180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.213 [2024-07-15 23:52:33.105199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.213 [2024-07-15 23:52:33.114462] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.213 [2024-07-15 23:52:33.114656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.213 [2024-07-15 23:52:33.114674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.213 [2024-07-15 23:52:33.123968] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.213 [2024-07-15 23:52:33.124161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.213 [2024-07-15 23:52:33.124178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.213 [2024-07-15 23:52:33.133444] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.213 [2024-07-15 23:52:33.133639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.213 [2024-07-15 23:52:33.133657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.213 [2024-07-15 23:52:33.142935] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.213 [2024-07-15 23:52:33.143130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.213 [2024-07-15 23:52:33.143155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.213 [2024-07-15 23:52:33.152450] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.213 [2024-07-15 23:52:33.152656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.213 [2024-07-15 23:52:33.152675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.213 [2024-07-15 23:52:33.161985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.213 [2024-07-15 23:52:33.162178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.213 [2024-07-15 23:52:33.162196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 
00:26:44.213 [2024-07-15 23:52:33.171684] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.213 [2024-07-15 23:52:33.171878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.213 [2024-07-15 23:52:33.171897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.213 [2024-07-15 23:52:33.181212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.213 [2024-07-15 23:52:33.181422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.213 [2024-07-15 23:52:33.181441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.191013] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.191210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.191233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.200570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.200771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.200787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.210039] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.210239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.210257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.219543] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.219738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.219756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.229006] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.229203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.229221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.238460] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.238654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.238672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.247971] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.248168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.248187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.257547] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.257745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.257764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.267019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.267218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.267239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.276495] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.276693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.276713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.286026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.286216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.286238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.295566] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.295763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.295781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.305079] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.305284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.305302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.314572] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.314766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 
23:52:33.314793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.324093] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.324290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.324308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.333570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.333762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.333788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.343043] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.343237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.343258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.352587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.352789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23281 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:44.483 [2024-07-15 23:52:33.352808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.362120] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.362326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.362345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.371615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.371808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.371825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.381168] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.381377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.381396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.390665] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.390861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15179 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.390879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.400170] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.400379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.483 [2024-07-15 23:52:33.400398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.483 [2024-07-15 23:52:33.409923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.483 [2024-07-15 23:52:33.410131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.484 [2024-07-15 23:52:33.410150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.484 [2024-07-15 23:52:33.419526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.484 [2024-07-15 23:52:33.419721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.484 [2024-07-15 23:52:33.419738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.484 [2024-07-15 23:52:33.429239] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.484 [2024-07-15 23:52:33.429445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:7 nsid:1 lba:6856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.484 [2024-07-15 23:52:33.429466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.484 [2024-07-15 23:52:33.438704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.484 [2024-07-15 23:52:33.438901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.484 [2024-07-15 23:52:33.438918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.484 [2024-07-15 23:52:33.448384] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.484 [2024-07-15 23:52:33.448580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.484 [2024-07-15 23:52:33.448599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.458069] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.458290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.458308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.467674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.467872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.467890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.477214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.477415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.477432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.486702] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.486896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.486913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.496179] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.496384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.496404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.505704] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 
23:52:33.505900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.505919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.515261] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.515458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.515475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.524725] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.524923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.524940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.534273] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.534471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.534488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.543763] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with 
pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.543957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.543974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.553312] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.553512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.553530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.562847] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.563041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.563058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.572329] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.572524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.572549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.581852] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.582045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.582063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.591403] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.591599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.591618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.600873] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.601065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.601084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 [2024-07-15 23:52:33.610376] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde24d0) with pdu=0x2000190fdeb0 00:26:44.743 [2024-07-15 23:52:33.610572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.743 [2024-07-15 23:52:33.610597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.743 00:26:44.743 Latency(us) 00:26:44.743 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:44.743 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:44.743 nvme0n1 : 2.00 26815.94 104.75 0.00 0.00 4764.78 2008.82 9858.89
00:26:44.743 ===================================================================================================================
00:26:44.743 Total : 26815.94 104.75 0.00 0.00 4764.78 2008.82 9858.89
00:26:44.743 0
00:26:44.743 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:44.743 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:44.743 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:44.743 | .driver_specific
00:26:44.743 | .nvme_error
00:26:44.743 | .status_code
00:26:44.743 | .command_transient_transport_error'
00:26:44.743 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 ))
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1152997
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1152997 ']'
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1152997
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1152997
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@948 -- # process_name=reactor_1
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1152997'
killing process with pid 1152997
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1152997
Received shutdown signal, test time was about 2.000000 seconds
00:26:45.018
00:26:45.018 Latency(us)
00:26:45.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:45.018 ===================================================================================================================
00:26:45.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:45.018 23:52:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1152997
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1153690
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1153690 /var/tmp/bperf.sock
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@823 -- # '[' -z 1153690 ']'
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # local max_retries=100
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:45.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # xtrace_disable
00:26:45.276 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:45.276 [2024-07-15 23:52:34.099568] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization...
00:26:45.276 [2024-07-15 23:52:34.099613] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153690 ]
00:26:45.276 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:45.276 Zero copy mechanism will not be used.
00:26:45.276 [2024-07-15 23:52:34.152479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:45.276 [2024-07-15 23:52:34.220049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:46.214 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:26:46.214 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # return 0
00:26:46.214 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:46.214 23:52:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:46.214 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:46.214 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable
00:26:46.214 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:46.214 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:26:46.214 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:46.214 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:46.472 nvme0n1
00:26:46.472 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:46.472 23:52:35
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:46.472 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.472 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:46.472 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:46.472 23:52:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:46.472 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:46.472 Zero copy mechanism will not be used. 00:26:46.472 Running I/O for 2 seconds... 00:26:46.732 [2024-07-15 23:52:35.449682] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.450135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.450165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.460141] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.460567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.460592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.469253] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 
23:52:35.469652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.469674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.476820] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.477201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.477221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.484513] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.484901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.484921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.491245] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.491597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.491618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.498597] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.498960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.498981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.504373] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.504749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.504770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.510099] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.510527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.510547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.515756] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.516119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.516138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.523424] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.523825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.523844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.529421] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.529832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.529852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.542546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.542955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.542987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.552530] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.552927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.552948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.560633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.561013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.561033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.568958] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.569343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.569364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.577026] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.577403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.577436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.585505] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.585862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.585882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.594140] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.594513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.594534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.602923] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.603295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.603316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.610189] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.610548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.610568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.618521] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.618884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.618904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.626303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.626679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.732 [2024-07-15 23:52:35.626699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.732 [2024-07-15 23:52:35.637828] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.732 [2024-07-15 23:52:35.638353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.733 [2024-07-15 23:52:35.638373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.733 [2024-07-15 23:52:35.651323] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.733 [2024-07-15 23:52:35.651722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.733 [2024-07-15 23:52:35.651746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.733 [2024-07-15 23:52:35.661485] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.733 [2024-07-15 23:52:35.661882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:46.733 [2024-07-15 23:52:35.661902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.733 [2024-07-15 23:52:35.671010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.733 [2024-07-15 23:52:35.671443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.733 [2024-07-15 23:52:35.671463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.733 [2024-07-15 23:52:35.683799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.733 [2024-07-15 23:52:35.684198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.733 [2024-07-15 23:52:35.684218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.733 [2024-07-15 23:52:35.694050] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.733 [2024-07-15 23:52:35.694470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.733 [2024-07-15 23:52:35.694490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.733 [2024-07-15 23:52:35.704308] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.992 [2024-07-15 23:52:35.704707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.992 [2024-07-15 23:52:35.704728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.992 [2024-07-15 23:52:35.712420] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.992 [2024-07-15 23:52:35.712538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.992 [2024-07-15 23:52:35.712557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.992 [2024-07-15 23:52:35.721445] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.992 [2024-07-15 23:52:35.721829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.992 [2024-07-15 23:52:35.721849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.729675] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.730051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.730071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.738637] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.739030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.739050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.747463] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.747841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.747860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.757349] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.757860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.757879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.770911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.771321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.771340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.781803] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 
00:26:46.993 [2024-07-15 23:52:35.782210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.782235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.795466] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.795871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.795891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.805209] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.805610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.805630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.812265] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.812400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.812418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.824833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.825218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.825241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.834307] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.834732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.834752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.842053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.842445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.842465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.849388] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.849813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.849832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 
23:52:35.856750] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.857126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.857146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.862936] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.863325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.863344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.876409] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.876609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.876635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.887089] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.887485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.887504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.893689] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.894048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.894068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.905215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.905777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.905801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.913931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.914400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.914420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.920358] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.920722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.920741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.926570] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.926912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.926931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.932685] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.933060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.933080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.938889] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.939254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.939274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:46.993 [2024-07-15 23:52:35.946608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.993 [2024-07-15 23:52:35.946941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.993 [2024-07-15 23:52:35.946961] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:46.994 [2024-07-15 23:52:35.952346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.994 [2024-07-15 23:52:35.952686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.994 [2024-07-15 23:52:35.952706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:46.994 [2024-07-15 23:52:35.957931] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.994 [2024-07-15 23:52:35.958276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.994 [2024-07-15 23:52:35.958295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:46.994 [2024-07-15 23:52:35.962670] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:46.994 [2024-07-15 23:52:35.963011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.994 [2024-07-15 23:52:35.963031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.253 [2024-07-15 23:52:35.968283] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.253 [2024-07-15 23:52:35.968637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:35.968656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:35.973916] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:35.974270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:35.974290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:35.979214] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:35.979561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:35.979581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:35.983883] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:35.984208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:35.984232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:35.988818] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:35.989161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:35.989181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:35.994078] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:35.994437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:35.994457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:35.999550] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:35.999900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:35.999919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.005616] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.005963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.005983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.010636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.010978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.010998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.015458] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.015773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.015793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.021162] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.021488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.021507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.026045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.026364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.026384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.031080] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 
00:26:47.254 [2024-07-15 23:52:36.031405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.031424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.036433] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.036749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.036769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.041135] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.041452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.041472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.046615] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.046927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.046947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.051260] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.051582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.051605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.056255] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.056589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.056609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.061355] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.061722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.061742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.066911] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.067238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.067259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.071906] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.072246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.072266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.076752] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.077086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.077105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.081390] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.081758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.081778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.087638] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.088032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.088051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.095270] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.095716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.095735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.101553] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.101892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.101912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.107540] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.107849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.254 [2024-07-15 23:52:36.107868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.254 [2024-07-15 23:52:36.112344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.254 [2024-07-15 23:52:36.112676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.112695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.120674] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.121155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.121174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.129118] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.129493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.129513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.134982] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.135314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.135334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.140266] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.140586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.140606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.146213] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.146524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.146543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.151442] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.151810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.151829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.157408] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.157744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.157763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.163604] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.163974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.255 [2024-07-15 23:52:36.163993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.173284] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.173907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.173926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.184585] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.185055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.185074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.192760] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.193132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.193151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.199467] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.199787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.199807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.206437] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.206821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.206840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.214126] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.214531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.214551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.255 [2024-07-15 23:52:36.221825] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.255 [2024-07-15 23:52:36.222281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.255 [2024-07-15 23:52:36.222303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.230085] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.230451] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.230471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.237482] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.237935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.237954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.245479] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.245887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.245907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.253259] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.253690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.253709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.261446] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 
00:26:47.515 [2024-07-15 23:52:36.261862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.261882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.269157] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.269475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.269495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.277164] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.277603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.277622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.284956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.285396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.285416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.292641] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.293073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.293093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.300771] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.301237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.301257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.308529] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.308916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.308936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:47.515 [2024-07-15 23:52:36.316009] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:47.515 [2024-07-15 23:52:36.316460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.515 [2024-07-15 23:52:36.316479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:47.516 [2024-07-15 23:52:36.322447] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.322763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.322782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.327661] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.328001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.328021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.332956] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.333284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.333304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.337643] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.337970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.337990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.342310] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.342647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.342667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.347010] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.347331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.347351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.351720] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.352042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.352061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.356321] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.356668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.356688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.361805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.362125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.362144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.366903] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.367241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.367260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.373688] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.374082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.374102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.380824] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.381253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.381273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.387799] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.388218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.388242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.395346] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.395792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.395815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.403137] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.403563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.403582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.411222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.411639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.411658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.419546] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.419970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.419990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.428608] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.428998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.429017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.436732] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.437164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.437184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.445507] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.445920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.445939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.453863] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.454231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.454249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.462221] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.462673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.462692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.469978] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.470369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.470389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.478212] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.478687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.478707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.516 [2024-07-15 23:52:36.487204] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.516 [2024-07-15 23:52:36.487638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.516 [2024-07-15 23:52:36.487658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.496090] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.496475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.496495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.504369] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.504822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.504840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.512322] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.512773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.512792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.520669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.521071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.521090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.528715] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.529127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.529147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.536366] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.536790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.536818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.543932] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.544459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.544479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.551952] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.552452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.552471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.559042] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.559399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.559420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.565645] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.566122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.566142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.572795] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.573186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.573205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.578605] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.578988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.579007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.584886] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.585212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.585238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.591457] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.591783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.591802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.597854] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.598186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.598206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.603755] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.604066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.604086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.610396] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.610817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.610837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.618241] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.618590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.618610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.626367] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.626757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.626776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.634805] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.635205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.635231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.643276] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.643692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.643713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.652029] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.652443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.777 [2024-07-15 23:52:36.652463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.777 [2024-07-15 23:52:36.659782] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.777 [2024-07-15 23:52:36.660229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.660250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.668127] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.668566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.668585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.676045] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.676434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.676454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.682053] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.682386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.682406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.688088] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.688448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.688467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.693623] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.693947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.693967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.699469] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.699836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.699855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.705589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.705908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.705928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.711019] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.711341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.711361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.716528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.716842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.716865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.722395] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.722694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.722713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.727679] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.728002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.728021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.733123] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.733440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.733460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.738526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.738835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.738855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:47.778 [2024-07-15 23:52:36.744070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:47.778 [2024-07-15 23:52:36.744395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:47.778 [2024-07-15 23:52:36.744415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:48.038 [2024-07-15 23:52:36.750005] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.038 [2024-07-15 23:52:36.750334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.038 [2024-07-15 23:52:36.750354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:48.038 [2024-07-15 23:52:36.755244] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.038 [2024-07-15 23:52:36.755563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.038 [2024-07-15 23:52:36.755584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:48.038 [2024-07-15 23:52:36.759915] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.038 [2024-07-15 23:52:36.760231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.038 [2024-07-15 23:52:36.760250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.038 [2024-07-15 23:52:36.764587] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.038 [2024-07-15 23:52:36.764913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.038 [2024-07-15 23:52:36.764933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:48.038 [2024-07-15 23:52:36.769492] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.038 [2024-07-15 23:52:36.769842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.038 [2024-07-15 23:52:36.769862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:48.038 [2024-07-15 23:52:36.774526] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.038 [2024-07-15 23:52:36.774844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.038 [2024-07-15 23:52:36.774863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:48.038 [2024-07-15 23:52:36.780215] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.038 [2024-07-15 23:52:36.780610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.038 [2024-07-15 23:52:36.780629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.038 [2024-07-15 23:52:36.787139] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.038 [2024-07-15 23:52:36.787549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.038 [2024-07-15 23:52:36.787569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:48.038 [2024-07-15 23:52:36.793716] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.038 [2024-07-15 23:52:36.794119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.794138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.801994] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.802382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.802402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.810067] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.810514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.810533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.818176] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.818592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.818612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.826200] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.826591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.826610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.834379] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.834736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.834755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.841801] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.842207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.842234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.850070] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.850453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.850473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.858794] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.859111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.859131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.866636] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.867040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.867059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.874969] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.875364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.875384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.039 [2024-07-15 23:52:36.883491] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.039 [2024-07-15 23:52:36.883870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.039 [2024-07-15 23:52:36.883889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.891038] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.891320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.891343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.898892] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.899266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.899286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.906210] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.906589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.906608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.914344] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.914731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.039 [2024-07-15 23:52:36.914750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.922724] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.923098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.923117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.930857] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.931270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.931290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.938816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.939201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.939221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.946581] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.946854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.946874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.954706] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.955079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.955098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.963282] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.963642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.963662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.971074] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.971471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.971491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.978985] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.979312] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.979331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.986208] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.986500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.986520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.992713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.992996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.993015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:36.999044] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.039 [2024-07-15 23:52:36.999369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:36.999388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.039 [2024-07-15 23:52:37.006660] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 
00:26:48.039 [2024-07-15 23:52:37.007057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.039 [2024-07-15 23:52:37.007077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.013222] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.013529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.013549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.019153] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.019433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.019453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.023870] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.024135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.024155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.028633] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.028907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.028927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.033143] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.033400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.033421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.038032] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.038261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.038281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.042371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.042640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.042660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.048076] 
tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.048450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.048481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.053772] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.054054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.054074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.059303] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.059633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.059653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.065148] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.065482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.065504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.071159] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.071462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.071482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.077640] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.077982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.078001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.084092] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.084427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.084447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.090431] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.090651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.090671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.097094] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.097407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.097425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.103371] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.103654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.103673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.110081] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.301 [2024-07-15 23:52:37.110391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.301 [2024-07-15 23:52:37.110410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.301 [2024-07-15 23:52:37.116668] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.116996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.117015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.123104] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.123389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.123408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.129769] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.130053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.130072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.136593] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.136821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.136841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.142833] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.143057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.302 [2024-07-15 23:52:37.143076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.149330] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.149617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.149636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.155790] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.156052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.156071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.160589] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.160762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.160781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.164973] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.165183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.165201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.169816] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.170043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.170062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.174598] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.174779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.174797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.178528] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.178687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.178707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.182401] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.182651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.182670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.186305] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.186477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.186496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.190669] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.190859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.190877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.194836] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 00:26:48.302 [2024-07-15 23:52:37.195016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.302 [2024-07-15 23:52:37.195035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.302 [2024-07-15 23:52:37.198713] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90 
00:26:48.302 [2024-07-15 23:52:37.198871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.302 [2024-07-15 23:52:37.198890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:48.302 [2024-07-15 23:52:37.202508] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
[... the same three-line sequence -- data_crc32_calc_done data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1 cid:15 -- repeats with varying LBAs from 23:52:37.202 through 23:52:37.438; repeated entries elided ...]
00:26:48.565 [2024-07-15 23:52:37.438777] tcp.c:2081:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xde2810) with pdu=0x2000190fef90
00:26:48.565 [2024-07-15 23:52:37.438998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.565 [2024-07-15 23:52:37.439017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:48.565
00:26:48.565 Latency(us)
00:26:48.565 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:48.565 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:48.565 nvme0n1            :       2.00    4728.42     591.05       0.00       0.00    3379.14    1752.38   14930.81
00:26:48.565 ===================================================================================================================
00:26:48.565 Total              :            4728.42     591.05       0.00       0.00    3379.14    1752.38   14930.81
00:26:48.565 0
00:26:48.565 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:48.565 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:48.565 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:48.565 | .driver_specific
00:26:48.565 | .nvme_error
00:26:48.565 | .status_code
00:26:48.565 | .command_transient_transport_error'
00:26:48.565 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:48.824 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 305 > 0 ))
00:26:48.824 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1153690
00:26:48.824 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1153690 ']'
00:26:48.824 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1153690
00:26:48.824 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:26:48.824 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:26:48.824 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1153690
00:26:48.824 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:26:48.824 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:26:48.824 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1153690'
killing process with pid 1153690
23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1153690
Received shutdown signal, test time was about 2.000000 seconds
00:26:48.825
00:26:48.825 Latency(us)
00:26:48.825 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:48.825 ===================================================================================================================
00:26:48.825 Total              :               0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:26:48.825 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1153690
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1151576
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@942 -- # '[' -z 1151576 ']'
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # kill -0 1151576
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # uname
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1151576
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1151576'
00:26:49.084 killing process with pid 1151576
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # kill 1151576
00:26:49.084 23:52:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # wait 1151576
00:26:49.344
00:26:49.344 real	0m16.745s
00:26:49.344 user	0m32.289s
00:26:49.344 sys	0m4.301s
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1118 -- # xtrace_disable
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:49.344 ************************************
00:26:49.344 END TEST nvmf_digest_error
00:26:49.344 ************************************
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1136 -- # return 0
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1151576 ']'
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1151576
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@942 -- # '[' -z 1151576 ']'
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # kill -0 1151576
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1151576) - No such process
23:52:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@969 -- # echo 'Process with pid 1151576 is not found'
Process with pid 1151576 is not found
23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:49.344 23:52:38 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:51.250 23:52:40 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:51.250
00:26:51.250 real	0m41.421s
00:26:51.250 user	1m6.729s
00:26:51.250 sys	0m12.646s
00:26:51.250 23:52:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1118 -- # xtrace_disable
00:26:51.250 23:52:40 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:26:51.250 ************************************
00:26:51.250 END TEST nvmf_digest
00:26:51.250 ************************************
00:26:51.509 23:52:40 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0
00:26:51.509 23:52:40 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:26:51.509 23:52:40 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:26:51.509 23:52:40 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:26:51.509 23:52:40 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:51.509 23:52:40 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']'
00:26:51.509 23:52:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable
00:26:51.509 23:52:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:51.510 ************************************
00:26:51.510 START TEST nvmf_bdevperf
00:26:51.510 ************************************
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:26:51.510 * Looking for test storage...
00:26:51.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 
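The trace records that follow enumerate the host's NICs and bucket them by PCI vendor:device ID into e810/x722/mlx groups before picking the test interfaces. A minimal sketch of that classification, using only the IDs visible in the trace (the real gather_supported_nvmf_pci_devs reads them from a PCI bus cache, and the mlx wildcard below is a simplification of the eight Mellanox device IDs listed):

```shell
#!/usr/bin/env sh
# Hedged sketch of the NIC classification done by gather_supported_nvmf_pci_devs
# in the trace below. IDs are taken from the e810/x722/mlx arrays in the log;
# the 0x15b3 wildcard stands in for the eight individual Mellanox IDs.

classify_nic() {                          # $1 = vendor:device, e.g. 0x8086:0x159b
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;     # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;     # Intel X722
        0x15b3:*)                    echo mlx  ;;     # Mellanox (simplified)
        *)                           echo unknown ;;
    esac
}

# The two ports found in this run (0000:86:00.0 / 0000:86:00.1) are 0x8086:0x159b:
classify_nic 0x8086:0x159b
```

In this run both discovered ports classify as e810, which is why the trace takes the `[[ e810 == e810 ]]` branch and restricts pci_devs to the e810 list.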
00:26:51.510 23:52:40 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:56.772 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:56.772 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:56.772 Found net devices under 0000:86:00.0: cvl_0_0 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:56.772 Found net devices under 0000:86:00.1: cvl_0_1 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.772 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:56.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:26:56.773 00:26:56.773 --- 10.0.0.2 ping statistics --- 00:26:56.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.773 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:56.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:26:56.773 00:26:56.773 --- 10.0.0.1 ping statistics --- 00:26:56.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.773 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1157696 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1157696 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@823 -- # '[' -z 1157696 ']' 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # local max_retries=100 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # xtrace_disable 00:26:56.773 23:52:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:56.773 [2024-07-15 23:52:45.722146] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:26:56.773 [2024-07-15 23:52:45.722187] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.032 [2024-07-15 23:52:45.779483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:57.032 [2024-07-15 23:52:45.859925] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.032 [2024-07-15 23:52:45.859959] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:57.032 [2024-07-15 23:52:45.859966] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.032 [2024-07-15 23:52:45.859972] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.032 [2024-07-15 23:52:45.859978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:57.032 [2024-07-15 23:52:45.860076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:57.032 [2024-07-15 23:52:45.860136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:57.032 [2024-07-15 23:52:45.860137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.598 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:26:57.598 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # return 0 00:26:57.598 23:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:57.598 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:57.598 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.860 [2024-07-15 23:52:46.580043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:57.860 23:52:46 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.860 Malloc0 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.860 [2024-07-15 23:52:46.639265] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # 
gen_nvmf_target_json 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.860 { 00:26:57.860 "params": { 00:26:57.860 "name": "Nvme$subsystem", 00:26:57.860 "trtype": "$TEST_TRANSPORT", 00:26:57.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.860 "adrfam": "ipv4", 00:26:57.860 "trsvcid": "$NVMF_PORT", 00:26:57.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.860 "hdgst": ${hdgst:-false}, 00:26:57.860 "ddgst": ${ddgst:-false} 00:26:57.860 }, 00:26:57.860 "method": "bdev_nvme_attach_controller" 00:26:57.860 } 00:26:57.860 EOF 00:26:57.860 )") 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:57.860 23:52:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:57.860 "params": { 00:26:57.860 "name": "Nvme1", 00:26:57.860 "trtype": "tcp", 00:26:57.860 "traddr": "10.0.0.2", 00:26:57.860 "adrfam": "ipv4", 00:26:57.860 "trsvcid": "4420", 00:26:57.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:57.860 "hdgst": false, 00:26:57.860 "ddgst": false 00:26:57.860 }, 00:26:57.860 "method": "bdev_nvme_attach_controller" 00:26:57.860 }' 00:26:57.860 [2024-07-15 23:52:46.689283] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:26:57.860 [2024-07-15 23:52:46.689326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1157944 ] 00:26:57.860 [2024-07-15 23:52:46.743760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.860 [2024-07-15 23:52:46.818747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.160 Running I/O for 1 seconds... 00:26:59.537 00:26:59.537 Latency(us) 00:26:59.537 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.537 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:59.537 Verification LBA range: start 0x0 length 0x4000 00:26:59.538 Nvme1n1 : 1.00 11104.25 43.38 0.00 0.00 11485.44 2364.99 15158.76 00:26:59.538 =================================================================================================================== 00:26:59.538 Total : 11104.25 43.38 0.00 0.00 11485.44 2364.99 15158.76 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1158177 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.538 { 00:26:59.538 "params": { 00:26:59.538 "name": 
"Nvme$subsystem", 00:26:59.538 "trtype": "$TEST_TRANSPORT", 00:26:59.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.538 "adrfam": "ipv4", 00:26:59.538 "trsvcid": "$NVMF_PORT", 00:26:59.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.538 "hdgst": ${hdgst:-false}, 00:26:59.538 "ddgst": ${ddgst:-false} 00:26:59.538 }, 00:26:59.538 "method": "bdev_nvme_attach_controller" 00:26:59.538 } 00:26:59.538 EOF 00:26:59.538 )") 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:59.538 23:52:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:59.538 "params": { 00:26:59.538 "name": "Nvme1", 00:26:59.538 "trtype": "tcp", 00:26:59.538 "traddr": "10.0.0.2", 00:26:59.538 "adrfam": "ipv4", 00:26:59.538 "trsvcid": "4420", 00:26:59.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.538 "hdgst": false, 00:26:59.538 "ddgst": false 00:26:59.538 }, 00:26:59.538 "method": "bdev_nvme_attach_controller" 00:26:59.538 }' 00:26:59.538 [2024-07-15 23:52:48.335555] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:26:59.538 [2024-07-15 23:52:48.335606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1158177 ] 00:26:59.538 [2024-07-15 23:52:48.390270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.538 [2024-07-15 23:52:48.460004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.797 Running I/O for 15 seconds... 
00:27:02.324 23:52:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1157696 00:27:02.324 23:52:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:02.586 [2024-07-15 23:52:51.305727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.586 [2024-07-15 23:52:51.305769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.586 [2024-07-15 23:52:51.305788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.586 [2024-07-15 23:52:51.305797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.586 [2024-07-15 23:52:51.305806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.586 [2024-07-15 23:52:51.305814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.586 [2024-07-15 23:52:51.305822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.586 [2024-07-15 23:52:51.305829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.586 [2024-07-15 23:52:51.305837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.586 [2024-07-15 23:52:51.305844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.586 [2024-07-15 23:52:51.305853] nvme_qpair.c: 
[... ~100 near-identical notice pairs elided: READ commands (sqid:1, nsid:1, len:8, lba 100192 through 101000 in steps of 8) each logged at 23:52:51 by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" after the target process was killed ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.588 [2024-07-15 23:52:51.307590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.588 [2024-07-15 23:52:51.307598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.588 [2024-07-15 23:52:51.307605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.588 [2024-07-15 23:52:51.307613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:101024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.588 [2024-07-15 23:52:51.307620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.588 [2024-07-15 23:52:51.307630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.588 [2024-07-15 23:52:51.307637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.588 [2024-07-15 23:52:51.307645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.588 [2024-07-15 23:52:51.307651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.588 [2024-07-15 23:52:51.307659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.588 [2024-07-15 23:52:51.307665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.588 [2024-07-15 23:52:51.307677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.588 [2024-07-15 23:52:51.307685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.588 [2024-07-15 23:52:51.307693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.588 [2024-07-15 23:52:51.307699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.588 [2024-07-15 23:52:51.307707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:101072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.588 [2024-07-15 23:52:51.307713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.589 [2024-07-15 23:52:51.307727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.589 [2024-07-15 23:52:51.307743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:02.589 [2024-07-15 23:52:51.307757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.589 [2024-07-15 23:52:51.307771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.589 [2024-07-15 23:52:51.307785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:101120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.589 [2024-07-15 23:52:51.307800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.589 [2024-07-15 23:52:51.307815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.589 [2024-07-15 23:52:51.307829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:101144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.589 [2024-07-15 23:52:51.307844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.589 [2024-07-15 23:52:51.307865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.589 [2024-07-15 23:52:51.307882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2735c70 is same with the state(5) to be set 00:27:02.589 [2024-07-15 23:52:51.307900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:02.589 [2024-07-15 23:52:51.307908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:02.589 [2024-07-15 23:52:51.307917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101168 len:8 PRP1 0x0 PRP2 0x0 00:27:02.589 [2024-07-15 23:52:51.307928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.589 [2024-07-15 23:52:51.307973] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2735c70 was disconnected and freed. reset controller. 
00:27:02.589 [2024-07-15 23:52:51.310848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.589 [2024-07-15 23:52:51.310902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.589 [2024-07-15 23:52:51.311434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.589 [2024-07-15 23:52:51.311450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.589 [2024-07-15 23:52:51.311457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.589 [2024-07-15 23:52:51.311635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.589 [2024-07-15 23:52:51.311812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.589 [2024-07-15 23:52:51.311820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.589 [2024-07-15 23:52:51.311828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.589 [2024-07-15 23:52:51.314665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.589 [2024-07-15 23:52:51.324189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.589 [2024-07-15 23:52:51.324565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.589 [2024-07-15 23:52:51.324584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.589 [2024-07-15 23:52:51.324591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.589 [2024-07-15 23:52:51.324769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.589 [2024-07-15 23:52:51.324948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.589 [2024-07-15 23:52:51.324958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.589 [2024-07-15 23:52:51.324964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.589 [2024-07-15 23:52:51.327760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.589 [2024-07-15 23:52:51.337042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.589 [2024-07-15 23:52:51.337536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.589 [2024-07-15 23:52:51.337553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.589 [2024-07-15 23:52:51.337559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.589 [2024-07-15 23:52:51.337722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.589 [2024-07-15 23:52:51.337884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.589 [2024-07-15 23:52:51.337893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.589 [2024-07-15 23:52:51.337899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.589 [2024-07-15 23:52:51.340598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.589 [2024-07-15 23:52:51.349871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.589 [2024-07-15 23:52:51.350274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.589 [2024-07-15 23:52:51.350291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.589 [2024-07-15 23:52:51.350298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.589 [2024-07-15 23:52:51.350461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.589 [2024-07-15 23:52:51.350623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.589 [2024-07-15 23:52:51.350632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.589 [2024-07-15 23:52:51.350638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.589 [2024-07-15 23:52:51.353327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.589 [2024-07-15 23:52:51.362662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.589 [2024-07-15 23:52:51.363139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.589 [2024-07-15 23:52:51.363181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.589 [2024-07-15 23:52:51.363204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.589 [2024-07-15 23:52:51.363686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.589 [2024-07-15 23:52:51.363860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.589 [2024-07-15 23:52:51.363869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.589 [2024-07-15 23:52:51.363875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.589 [2024-07-15 23:52:51.366513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.589 [2024-07-15 23:52:51.375605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.589 [2024-07-15 23:52:51.376075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.589 [2024-07-15 23:52:51.376092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.589 [2024-07-15 23:52:51.376098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.589 [2024-07-15 23:52:51.376286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.589 [2024-07-15 23:52:51.376459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.589 [2024-07-15 23:52:51.376468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.589 [2024-07-15 23:52:51.376474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.589 [2024-07-15 23:52:51.379165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.589 [2024-07-15 23:52:51.388507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.589 [2024-07-15 23:52:51.388990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.589 [2024-07-15 23:52:51.389034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.589 [2024-07-15 23:52:51.389056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.589 [2024-07-15 23:52:51.389577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.589 [2024-07-15 23:52:51.389750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.589 [2024-07-15 23:52:51.389760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.589 [2024-07-15 23:52:51.389766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.589 [2024-07-15 23:52:51.393649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.589 [2024-07-15 23:52:51.402119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.589 [2024-07-15 23:52:51.402570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.589 [2024-07-15 23:52:51.402587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.590 [2024-07-15 23:52:51.402595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.590 [2024-07-15 23:52:51.402762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.590 [2024-07-15 23:52:51.402928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.590 [2024-07-15 23:52:51.402937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.590 [2024-07-15 23:52:51.402943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.590 [2024-07-15 23:52:51.405670] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.590 [2024-07-15 23:52:51.415043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.590 [2024-07-15 23:52:51.415533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.590 [2024-07-15 23:52:51.415576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.590 [2024-07-15 23:52:51.415598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.590 [2024-07-15 23:52:51.415976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.590 [2024-07-15 23:52:51.416139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.590 [2024-07-15 23:52:51.416148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.590 [2024-07-15 23:52:51.416157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.590 [2024-07-15 23:52:51.418921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.590 [2024-07-15 23:52:51.427909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.590 [2024-07-15 23:52:51.428326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.590 [2024-07-15 23:52:51.428368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.590 [2024-07-15 23:52:51.428390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.590 [2024-07-15 23:52:51.428906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.590 [2024-07-15 23:52:51.429071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.590 [2024-07-15 23:52:51.429080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.590 [2024-07-15 23:52:51.429086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.590 [2024-07-15 23:52:51.431772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.590 [2024-07-15 23:52:51.440740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.590 [2024-07-15 23:52:51.441222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.590 [2024-07-15 23:52:51.441288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.590 [2024-07-15 23:52:51.441310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.590 [2024-07-15 23:52:51.441828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.590 [2024-07-15 23:52:51.441991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.590 [2024-07-15 23:52:51.442001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.590 [2024-07-15 23:52:51.442007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.590 [2024-07-15 23:52:51.444689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.590 [2024-07-15 23:52:51.453657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.590 [2024-07-15 23:52:51.454132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.590 [2024-07-15 23:52:51.454173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.590 [2024-07-15 23:52:51.454195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.590 [2024-07-15 23:52:51.454771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.590 [2024-07-15 23:52:51.454945] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.590 [2024-07-15 23:52:51.454955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.590 [2024-07-15 23:52:51.454961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.590 [2024-07-15 23:52:51.457593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.590 [2024-07-15 23:52:51.466552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.590 [2024-07-15 23:52:51.467039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.590 [2024-07-15 23:52:51.467087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.590 [2024-07-15 23:52:51.467109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.590 [2024-07-15 23:52:51.467661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.590 [2024-07-15 23:52:51.467834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.590 [2024-07-15 23:52:51.467844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.590 [2024-07-15 23:52:51.467851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.590 [2024-07-15 23:52:51.470571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.590 [2024-07-15 23:52:51.479489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.590 [2024-07-15 23:52:51.479861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.590 [2024-07-15 23:52:51.479878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.590 [2024-07-15 23:52:51.479885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.590 [2024-07-15 23:52:51.480056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.590 [2024-07-15 23:52:51.480243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.590 [2024-07-15 23:52:51.480269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.590 [2024-07-15 23:52:51.480276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.590 [2024-07-15 23:52:51.482945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.590 [2024-07-15 23:52:51.492365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.590 [2024-07-15 23:52:51.492855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.590 [2024-07-15 23:52:51.492897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.590 [2024-07-15 23:52:51.492918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.590 [2024-07-15 23:52:51.493273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.590 [2024-07-15 23:52:51.493447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.590 [2024-07-15 23:52:51.493456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.590 [2024-07-15 23:52:51.493462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.590 [2024-07-15 23:52:51.496120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.590 [2024-07-15 23:52:51.505314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.590 [2024-07-15 23:52:51.505697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.590 [2024-07-15 23:52:51.505713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.590 [2024-07-15 23:52:51.505720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.590 [2024-07-15 23:52:51.505882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.590 [2024-07-15 23:52:51.506048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.590 [2024-07-15 23:52:51.506058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.590 [2024-07-15 23:52:51.506063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.590 [2024-07-15 23:52:51.508753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.590 [2024-07-15 23:52:51.518191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.590 [2024-07-15 23:52:51.518685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.590 [2024-07-15 23:52:51.518702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:02.590 [2024-07-15 23:52:51.518709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:02.590 [2024-07-15 23:52:51.518881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:02.590 [2024-07-15 23:52:51.519053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.591 [2024-07-15 23:52:51.519062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.591 [2024-07-15 23:52:51.519069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.591 [2024-07-15 23:52:51.521755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.591 [2024-07-15 23:52:51.531020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.591 [2024-07-15 23:52:51.531497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.591 [2024-07-15 23:52:51.531540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.591 [2024-07-15 23:52:51.531562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.591 [2024-07-15 23:52:51.532091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.591 [2024-07-15 23:52:51.532260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.591 [2024-07-15 23:52:51.532269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.591 [2024-07-15 23:52:51.532293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.591 [2024-07-15 23:52:51.534959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.591 [2024-07-15 23:52:51.543931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.591 [2024-07-15 23:52:51.544397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.591 [2024-07-15 23:52:51.544442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.591 [2024-07-15 23:52:51.544464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.591 [2024-07-15 23:52:51.545042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.591 [2024-07-15 23:52:51.545415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.591 [2024-07-15 23:52:51.545426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.591 [2024-07-15 23:52:51.545432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.591 [2024-07-15 23:52:51.548095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.850 [2024-07-15 23:52:51.557001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.850 [2024-07-15 23:52:51.557433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.850 [2024-07-15 23:52:51.557453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.850 [2024-07-15 23:52:51.557461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.850 [2024-07-15 23:52:51.557639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.850 [2024-07-15 23:52:51.557825] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.850 [2024-07-15 23:52:51.557835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.850 [2024-07-15 23:52:51.557841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.850 [2024-07-15 23:52:51.560663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.850 [2024-07-15 23:52:51.570261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.850 [2024-07-15 23:52:51.570621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.850 [2024-07-15 23:52:51.570640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.850 [2024-07-15 23:52:51.570647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.850 [2024-07-15 23:52:51.570825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.850 [2024-07-15 23:52:51.571004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.850 [2024-07-15 23:52:51.571013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.850 [2024-07-15 23:52:51.571020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.850 [2024-07-15 23:52:51.573851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.850 [2024-07-15 23:52:51.583217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.850 [2024-07-15 23:52:51.583637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.850 [2024-07-15 23:52:51.583654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.583661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.583824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.583987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.583996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.584002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.586632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.596116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.596500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.851 [2024-07-15 23:52:51.596544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.596574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.597154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.597548] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.597558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.597564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.600286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.609046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.609469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.851 [2024-07-15 23:52:51.609519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.609542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.610114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.610302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.610312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.610318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.612982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.621968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.622446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.851 [2024-07-15 23:52:51.622490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.622512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.623089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.623584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.623595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.623601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.626252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.634860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.635289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.851 [2024-07-15 23:52:51.635332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.635354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.635932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.636450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.636465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.636484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.639073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.647734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.648214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.851 [2024-07-15 23:52:51.648236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.648244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.648416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.648594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.648603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.648609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.651200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.660671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.661154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.851 [2024-07-15 23:52:51.661196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.661217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.661810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.662400] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.662426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.662446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.666096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.674504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.674901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.851 [2024-07-15 23:52:51.674918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.674925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.675093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.675283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.675293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.675299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.678000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.687305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.687777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.851 [2024-07-15 23:52:51.687793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.687800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.687962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.688126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.688135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.688140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.690828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.700100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.700488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.851 [2024-07-15 23:52:51.700504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.700512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.700674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.700837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.700846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.700852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.703542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.713019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.713494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.851 [2024-07-15 23:52:51.713511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.851 [2024-07-15 23:52:51.713519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.851 [2024-07-15 23:52:51.713681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.851 [2024-07-15 23:52:51.713843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.851 [2024-07-15 23:52:51.713852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.851 [2024-07-15 23:52:51.713859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.851 [2024-07-15 23:52:51.716544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.851 [2024-07-15 23:52:51.725934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.851 [2024-07-15 23:52:51.726400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.852 [2024-07-15 23:52:51.726417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.852 [2024-07-15 23:52:51.726424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.852 [2024-07-15 23:52:51.726589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.852 [2024-07-15 23:52:51.726752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.852 [2024-07-15 23:52:51.726762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.852 [2024-07-15 23:52:51.726768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.852 [2024-07-15 23:52:51.729456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.852 [2024-07-15 23:52:51.738722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.852 [2024-07-15 23:52:51.739173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.852 [2024-07-15 23:52:51.739189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.852 [2024-07-15 23:52:51.739196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.852 [2024-07-15 23:52:51.739384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.852 [2024-07-15 23:52:51.739556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.852 [2024-07-15 23:52:51.739566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.852 [2024-07-15 23:52:51.739572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.852 [2024-07-15 23:52:51.742274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.852 [2024-07-15 23:52:51.751532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.852 [2024-07-15 23:52:51.752012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.852 [2024-07-15 23:52:51.752054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.852 [2024-07-15 23:52:51.752076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.852 [2024-07-15 23:52:51.752589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.852 [2024-07-15 23:52:51.752763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.852 [2024-07-15 23:52:51.752773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.852 [2024-07-15 23:52:51.752780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.852 [2024-07-15 23:52:51.755420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.852 [2024-07-15 23:52:51.764482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.852 [2024-07-15 23:52:51.764936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.852 [2024-07-15 23:52:51.764978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.852 [2024-07-15 23:52:51.765001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.852 [2024-07-15 23:52:51.765534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.852 [2024-07-15 23:52:51.765708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.852 [2024-07-15 23:52:51.765717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.852 [2024-07-15 23:52:51.765727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.852 [2024-07-15 23:52:51.768423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.852 [2024-07-15 23:52:51.777342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.852 [2024-07-15 23:52:51.777790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.852 [2024-07-15 23:52:51.777807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.852 [2024-07-15 23:52:51.777814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.852 [2024-07-15 23:52:51.777977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.852 [2024-07-15 23:52:51.778140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.852 [2024-07-15 23:52:51.778148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.852 [2024-07-15 23:52:51.778154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.852 [2024-07-15 23:52:51.780907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.852 [2024-07-15 23:52:51.790292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.852 [2024-07-15 23:52:51.790752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.852 [2024-07-15 23:52:51.790769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.852 [2024-07-15 23:52:51.790777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.852 [2024-07-15 23:52:51.790950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.852 [2024-07-15 23:52:51.791124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.852 [2024-07-15 23:52:51.791134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.852 [2024-07-15 23:52:51.791140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.852 [2024-07-15 23:52:51.793761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.852 [2024-07-15 23:52:51.803196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.852 [2024-07-15 23:52:51.803667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.852 [2024-07-15 23:52:51.803710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.852 [2024-07-15 23:52:51.803732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.852 [2024-07-15 23:52:51.804336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.852 [2024-07-15 23:52:51.804592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.852 [2024-07-15 23:52:51.804604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.852 [2024-07-15 23:52:51.804614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.852 [2024-07-15 23:52:51.808668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.852 [2024-07-15 23:52:51.816847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.852 [2024-07-15 23:52:51.817313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.852 [2024-07-15 23:52:51.817331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:02.852 [2024-07-15 23:52:51.817339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:02.852 [2024-07-15 23:52:51.817526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:02.852 [2024-07-15 23:52:51.817699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.852 [2024-07-15 23:52:51.817709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.852 [2024-07-15 23:52:51.817715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.852 [2024-07-15 23:52:51.820634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.112 [2024-07-15 23:52:51.829960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.112 [2024-07-15 23:52:51.830431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-15 23:52:51.830480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.112 [2024-07-15 23:52:51.830503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.112 [2024-07-15 23:52:51.831018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.112 [2024-07-15 23:52:51.831182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.112 [2024-07-15 23:52:51.831191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.112 [2024-07-15 23:52:51.831198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.112 [2024-07-15 23:52:51.833890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.112 [2024-07-15 23:52:51.843007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.112 [2024-07-15 23:52:51.843482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-15 23:52:51.843499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.112 [2024-07-15 23:52:51.843507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.112 [2024-07-15 23:52:51.843668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.112 [2024-07-15 23:52:51.843831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.112 [2024-07-15 23:52:51.843840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.112 [2024-07-15 23:52:51.843846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.112 [2024-07-15 23:52:51.846536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.112 [2024-07-15 23:52:51.855871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.112 [2024-07-15 23:52:51.856339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-15 23:52:51.856356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.112 [2024-07-15 23:52:51.856363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.112 [2024-07-15 23:52:51.856526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.112 [2024-07-15 23:52:51.856693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.112 [2024-07-15 23:52:51.856702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.112 [2024-07-15 23:52:51.856708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.112 [2024-07-15 23:52:51.859397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.112 [2024-07-15 23:52:51.868666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.112 [2024-07-15 23:52:51.869067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-15 23:52:51.869104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.112 [2024-07-15 23:52:51.869127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.112 [2024-07-15 23:52:51.869656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.112 [2024-07-15 23:52:51.869829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.112 [2024-07-15 23:52:51.869839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.112 [2024-07-15 23:52:51.869845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.112 [2024-07-15 23:52:51.872482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.112 [2024-07-15 23:52:51.881693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.112 [2024-07-15 23:52:51.882028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-15 23:52:51.882045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.112 [2024-07-15 23:52:51.882053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.112 [2024-07-15 23:52:51.882215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.112 [2024-07-15 23:52:51.882383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.112 [2024-07-15 23:52:51.882392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.112 [2024-07-15 23:52:51.882398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.112 [2024-07-15 23:52:51.885083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.112 [2024-07-15 23:52:51.894665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.112 [2024-07-15 23:52:51.895117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-15 23:52:51.895134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.112 [2024-07-15 23:52:51.895142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.112 [2024-07-15 23:52:51.895327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.112 [2024-07-15 23:52:51.895501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.112 [2024-07-15 23:52:51.895510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.112 [2024-07-15 23:52:51.895517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.112 [2024-07-15 23:52:51.898176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.112 [2024-07-15 23:52:51.907591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.112 [2024-07-15 23:52:51.908063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-15 23:52:51.908106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.112 [2024-07-15 23:52:51.908129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.112 [2024-07-15 23:52:51.908719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.112 [2024-07-15 23:52:51.908917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.112 [2024-07-15 23:52:51.908926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.112 [2024-07-15 23:52:51.908932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.112 [2024-07-15 23:52:51.911563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.112 [2024-07-15 23:52:51.920630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.112 [2024-07-15 23:52:51.921041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-15 23:52:51.921078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.112 [2024-07-15 23:52:51.921101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:51.921693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:51.921914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:51.921924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:51.921931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:51.924606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:51.933690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.113 [2024-07-15 23:52:51.934109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-15 23:52:51.934127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.113 [2024-07-15 23:52:51.934134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:51.934316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:51.934493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:51.934503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:51.934510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:51.937357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:51.946749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.113 [2024-07-15 23:52:51.947197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-15 23:52:51.947260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.113 [2024-07-15 23:52:51.947284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:51.947862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:51.948344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:51.948354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:51.948360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:51.951063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:51.959703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.113 [2024-07-15 23:52:51.960086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-15 23:52:51.960103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.113 [2024-07-15 23:52:51.960110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:51.960287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:51.960460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:51.960469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:51.960475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:51.963078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:51.972577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.113 [2024-07-15 23:52:51.972921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-15 23:52:51.972937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.113 [2024-07-15 23:52:51.972944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:51.973127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:51.973307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:51.973316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:51.973323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:51.976049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:51.985761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.113 [2024-07-15 23:52:51.986189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-15 23:52:51.986206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.113 [2024-07-15 23:52:51.986215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:51.986397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:51.986579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:51.986588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:51.986595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:51.989422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:51.998945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.113 [2024-07-15 23:52:51.999408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-15 23:52:51.999426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.113 [2024-07-15 23:52:51.999433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:51.999611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:51.999790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:51.999799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:51.999806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:52.002635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:52.012311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.113 [2024-07-15 23:52:52.012817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-15 23:52:52.012861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.113 [2024-07-15 23:52:52.012883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:52.013421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:52.013604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:52.013612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:52.013619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:52.016508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:52.025395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.113 [2024-07-15 23:52:52.025817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-15 23:52:52.025859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.113 [2024-07-15 23:52:52.025880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:52.026330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:52.026508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:52.026517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:52.026523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:52.029356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:52.038529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.113 [2024-07-15 23:52:52.038944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-15 23:52:52.038960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.113 [2024-07-15 23:52:52.038967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:52.039138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:52.039315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:52.039323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:52.039329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:52.042132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:52.051427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.113 [2024-07-15 23:52:52.051776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-15 23:52:52.051791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.113 [2024-07-15 23:52:52.051798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.113 [2024-07-15 23:52:52.051969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.113 [2024-07-15 23:52:52.052141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.113 [2024-07-15 23:52:52.052148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.113 [2024-07-15 23:52:52.052155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.113 [2024-07-15 23:52:52.054835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.113 [2024-07-15 23:52:52.064289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.114 [2024-07-15 23:52:52.064620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-15 23:52:52.064636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.114 [2024-07-15 23:52:52.064643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.114 [2024-07-15 23:52:52.064814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.114 [2024-07-15 23:52:52.064985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.114 [2024-07-15 23:52:52.064992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.114 [2024-07-15 23:52:52.064999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.114 [2024-07-15 23:52:52.067796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.114 [2024-07-15 23:52:52.077213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.114 [2024-07-15 23:52:52.077608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-15 23:52:52.077651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.114 [2024-07-15 23:52:52.077680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.114 [2024-07-15 23:52:52.078236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.114 [2024-07-15 23:52:52.078408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.114 [2024-07-15 23:52:52.078416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.114 [2024-07-15 23:52:52.078422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.114 [2024-07-15 23:52:52.081205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.373 [2024-07-15 23:52:52.090394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.373 [2024-07-15 23:52:52.090808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.373 [2024-07-15 23:52:52.090825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.373 [2024-07-15 23:52:52.090833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.373 [2024-07-15 23:52:52.091015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.373 [2024-07-15 23:52:52.091187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.373 [2024-07-15 23:52:52.091195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.373 [2024-07-15 23:52:52.091201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.373 [2024-07-15 23:52:52.093926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.373 [2024-07-15 23:52:52.103340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.373 [2024-07-15 23:52:52.103737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.373 [2024-07-15 23:52:52.103753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.373 [2024-07-15 23:52:52.103760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.373 [2024-07-15 23:52:52.103922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.373 [2024-07-15 23:52:52.104084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.373 [2024-07-15 23:52:52.104091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.373 [2024-07-15 23:52:52.104097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.373 [2024-07-15 23:52:52.106736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.373 [2024-07-15 23:52:52.116350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.373 [2024-07-15 23:52:52.116766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.373 [2024-07-15 23:52:52.116781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.373 [2024-07-15 23:52:52.116788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.373 [2024-07-15 23:52:52.116949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.373 [2024-07-15 23:52:52.117111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.373 [2024-07-15 23:52:52.117121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.373 [2024-07-15 23:52:52.117127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.373 [2024-07-15 23:52:52.119768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.373 [2024-07-15 23:52:52.129315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.373 [2024-07-15 23:52:52.129770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.373 [2024-07-15 23:52:52.129812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.373 [2024-07-15 23:52:52.129833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.373 [2024-07-15 23:52:52.130425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.373 [2024-07-15 23:52:52.130640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.373 [2024-07-15 23:52:52.130648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.373 [2024-07-15 23:52:52.130654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.373 [2024-07-15 23:52:52.133359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.373 [2024-07-15 23:52:52.142270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.373 [2024-07-15 23:52:52.142591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.373 [2024-07-15 23:52:52.142607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.373 [2024-07-15 23:52:52.142614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.373 [2024-07-15 23:52:52.142775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.373 [2024-07-15 23:52:52.142937] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.373 [2024-07-15 23:52:52.142944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.373 [2024-07-15 23:52:52.142950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.373 [2024-07-15 23:52:52.145696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.373 [2024-07-15 23:52:52.155275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.373 [2024-07-15 23:52:52.155670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.373 [2024-07-15 23:52:52.155684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.373 [2024-07-15 23:52:52.155691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.373 [2024-07-15 23:52:52.155852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.373 [2024-07-15 23:52:52.156014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.373 [2024-07-15 23:52:52.156021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.373 [2024-07-15 23:52:52.156027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.373 [2024-07-15 23:52:52.158717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.373 [2024-07-15 23:52:52.168243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.373 [2024-07-15 23:52:52.168571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.373 [2024-07-15 23:52:52.168585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.373 [2024-07-15 23:52:52.168592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.373 [2024-07-15 23:52:52.168753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.373 [2024-07-15 23:52:52.168915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.373 [2024-07-15 23:52:52.168922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.373 [2024-07-15 23:52:52.168928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.373 [2024-07-15 23:52:52.171663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.373 [2024-07-15 23:52:52.181239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.373 [2024-07-15 23:52:52.181512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.373 [2024-07-15 23:52:52.181527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.373 [2024-07-15 23:52:52.181533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.374 [2024-07-15 23:52:52.181697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.374 [2024-07-15 23:52:52.181860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.374 [2024-07-15 23:52:52.181867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.374 [2024-07-15 23:52:52.181873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.374 [2024-07-15 23:52:52.184616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.374 [2024-07-15 23:52:52.194228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.374 [2024-07-15 23:52:52.194599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.374 [2024-07-15 23:52:52.194614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.374 [2024-07-15 23:52:52.194620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.374 [2024-07-15 23:52:52.194781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.374 [2024-07-15 23:52:52.194943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.374 [2024-07-15 23:52:52.194950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.374 [2024-07-15 23:52:52.194956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.374 [2024-07-15 23:52:52.197651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.374 [2024-07-15 23:52:52.207238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.374 [2024-07-15 23:52:52.207634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.374 [2024-07-15 23:52:52.207675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.374 [2024-07-15 23:52:52.207696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.374 [2024-07-15 23:52:52.208123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.374 [2024-07-15 23:52:52.208308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.374 [2024-07-15 23:52:52.208316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.374 [2024-07-15 23:52:52.208323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.374 [2024-07-15 23:52:52.211041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.374 [2024-07-15 23:52:52.220109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.374 [2024-07-15 23:52:52.220512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.374 [2024-07-15 23:52:52.220528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.374 [2024-07-15 23:52:52.220535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.374 [2024-07-15 23:52:52.220706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.374 [2024-07-15 23:52:52.220877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.374 [2024-07-15 23:52:52.220885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.374 [2024-07-15 23:52:52.220891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.374 [2024-07-15 23:52:52.223592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.374 [2024-07-15 23:52:52.233026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.374 [2024-07-15 23:52:52.233426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.374 [2024-07-15 23:52:52.233469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.374 [2024-07-15 23:52:52.233491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.374 [2024-07-15 23:52:52.233987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.374 [2024-07-15 23:52:52.234157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.374 [2024-07-15 23:52:52.234165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.374 [2024-07-15 23:52:52.234171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.374 [2024-07-15 23:52:52.236900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.374 [2024-07-15 23:52:52.245876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.374 [2024-07-15 23:52:52.246260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.374 [2024-07-15 23:52:52.246277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.374 [2024-07-15 23:52:52.246283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.374 [2024-07-15 23:52:52.246455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.374 [2024-07-15 23:52:52.246630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.374 [2024-07-15 23:52:52.246637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.374 [2024-07-15 23:52:52.246646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.374 [2024-07-15 23:52:52.249241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.374 [2024-07-15 23:52:52.258932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.374 [2024-07-15 23:52:52.259370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.374 [2024-07-15 23:52:52.259386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.374 [2024-07-15 23:52:52.259393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.374 [2024-07-15 23:52:52.259564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.374 [2024-07-15 23:52:52.259735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.374 [2024-07-15 23:52:52.259743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.374 [2024-07-15 23:52:52.259749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.374 [2024-07-15 23:52:52.262498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.374 [2024-07-15 23:52:52.271956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.374 [2024-07-15 23:52:52.272336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.374 [2024-07-15 23:52:52.272351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.374 [2024-07-15 23:52:52.272357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.374 [2024-07-15 23:52:52.272519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.374 [2024-07-15 23:52:52.272680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.374 [2024-07-15 23:52:52.272688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.374 [2024-07-15 23:52:52.272693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.374 [2024-07-15 23:52:52.275383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.374 [2024-07-15 23:52:52.284913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.374 [2024-07-15 23:52:52.285246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.374 [2024-07-15 23:52:52.285261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.374 [2024-07-15 23:52:52.285268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.374 [2024-07-15 23:52:52.285430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.374 [2024-07-15 23:52:52.285591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.374 [2024-07-15 23:52:52.285598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.374 [2024-07-15 23:52:52.285604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.374 [2024-07-15 23:52:52.288237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.374 [2024-07-15 23:52:52.297852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.374 [2024-07-15 23:52:52.298318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.374 [2024-07-15 23:52:52.298369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.374 [2024-07-15 23:52:52.298391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.374 [2024-07-15 23:52:52.298937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.374 [2024-07-15 23:52:52.299189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.374 [2024-07-15 23:52:52.299199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.374 [2024-07-15 23:52:52.299208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.374 [2024-07-15 23:52:52.303275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.375 [2024-07-15 23:52:52.311132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.375 [2024-07-15 23:52:52.311629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.375 [2024-07-15 23:52:52.311645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.375 [2024-07-15 23:52:52.311651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.375 [2024-07-15 23:52:52.311823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.375 [2024-07-15 23:52:52.311994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.375 [2024-07-15 23:52:52.312001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.375 [2024-07-15 23:52:52.312008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.375 [2024-07-15 23:52:52.314753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.375 [2024-07-15 23:52:52.324285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.375 [2024-07-15 23:52:52.324779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.375 [2024-07-15 23:52:52.324821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.375 [2024-07-15 23:52:52.324841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.375 [2024-07-15 23:52:52.325313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.375 [2024-07-15 23:52:52.325485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.375 [2024-07-15 23:52:52.325492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.375 [2024-07-15 23:52:52.325498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.375 [2024-07-15 23:52:52.328150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.375 [2024-07-15 23:52:52.337295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.375 [2024-07-15 23:52:52.337793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.375 [2024-07-15 23:52:52.337835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.375 [2024-07-15 23:52:52.337856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.375 [2024-07-15 23:52:52.338349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.375 [2024-07-15 23:52:52.338524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.375 [2024-07-15 23:52:52.338531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.375 [2024-07-15 23:52:52.338537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.375 [2024-07-15 23:52:52.341217] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.634 [2024-07-15 23:52:52.350358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.634 [2024-07-15 23:52:52.350878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.634 [2024-07-15 23:52:52.350896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.350903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.351081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.351265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.351273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.351280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.353971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.363257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.363718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.363762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.363783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.364196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.364385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.364393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.364399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.367063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.376123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.376604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.376646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.376668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.377197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.377373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.377381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.377387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.380048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.389017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.389488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.389505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.389511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.389673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.389836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.389843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.389848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.392516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.401845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.402315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.402331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.402337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.402499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.402661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.402668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.402673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.405369] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.414739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.415189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.415243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.415266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.415843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.416330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.416338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.416344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.419041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.427726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.428184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.428237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.428267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.428845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.429372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.429380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.429386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.432036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.440637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.441114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.441130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.441136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.441313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.441485] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.441492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.441499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.444150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.453429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.453885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.453900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.453906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.454067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.454235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.454243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.454249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.456931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.466352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.466812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.466828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.466834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.466995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.467157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.467167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.467173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.469860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.479250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.479696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.479711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.635 [2024-07-15 23:52:52.479718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.635 [2024-07-15 23:52:52.479879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.635 [2024-07-15 23:52:52.480040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.635 [2024-07-15 23:52:52.480047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.635 [2024-07-15 23:52:52.480053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.635 [2024-07-15 23:52:52.482743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.635 [2024-07-15 23:52:52.492159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.635 [2024-07-15 23:52:52.492618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.635 [2024-07-15 23:52:52.492660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.636 [2024-07-15 23:52:52.492681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.636 [2024-07-15 23:52:52.493200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.636 [2024-07-15 23:52:52.493390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.636 [2024-07-15 23:52:52.493398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.636 [2024-07-15 23:52:52.493404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.636 [2024-07-15 23:52:52.496061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.636 [2024-07-15 23:52:52.505026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.636 [2024-07-15 23:52:52.505448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.636 [2024-07-15 23:52:52.505464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.636 [2024-07-15 23:52:52.505470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.636 [2024-07-15 23:52:52.505632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.636 [2024-07-15 23:52:52.505794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.636 [2024-07-15 23:52:52.505801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.636 [2024-07-15 23:52:52.505806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.636 [2024-07-15 23:52:52.508490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.636 [2024-07-15 23:52:52.517857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.636 [2024-07-15 23:52:52.518332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.636 [2024-07-15 23:52:52.518348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.636 [2024-07-15 23:52:52.518354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.636 [2024-07-15 23:52:52.518516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.636 [2024-07-15 23:52:52.518678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.636 [2024-07-15 23:52:52.518684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.636 [2024-07-15 23:52:52.518690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.636 [2024-07-15 23:52:52.521415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.636 [2024-07-15 23:52:52.530733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.636 [2024-07-15 23:52:52.531198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.636 [2024-07-15 23:52:52.531254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.636 [2024-07-15 23:52:52.531277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.636 [2024-07-15 23:52:52.531688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.636 [2024-07-15 23:52:52.531851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.636 [2024-07-15 23:52:52.531858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.636 [2024-07-15 23:52:52.531864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.636 [2024-07-15 23:52:52.534458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.636 [2024-07-15 23:52:52.543561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.636 [2024-07-15 23:52:52.544016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.636 [2024-07-15 23:52:52.544057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.636 [2024-07-15 23:52:52.544078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.636 [2024-07-15 23:52:52.544601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.636 [2024-07-15 23:52:52.544778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.636 [2024-07-15 23:52:52.544786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.636 [2024-07-15 23:52:52.544792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.636 [2024-07-15 23:52:52.547530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.636 [2024-07-15 23:52:52.556590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.636 [2024-07-15 23:52:52.557039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.636 [2024-07-15 23:52:52.557054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.636 [2024-07-15 23:52:52.557063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.636 [2024-07-15 23:52:52.557230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.636 [2024-07-15 23:52:52.557393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.636 [2024-07-15 23:52:52.557400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.636 [2024-07-15 23:52:52.557406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.636 [2024-07-15 23:52:52.560040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.636 [2024-07-15 23:52:52.569569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.636 [2024-07-15 23:52:52.570029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.636 [2024-07-15 23:52:52.570044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.636 [2024-07-15 23:52:52.570050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.636 [2024-07-15 23:52:52.570212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.636 [2024-07-15 23:52:52.570403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.636 [2024-07-15 23:52:52.570411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.636 [2024-07-15 23:52:52.570417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.636 [2024-07-15 23:52:52.573248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.636 [2024-07-15 23:52:52.582578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.636 [2024-07-15 23:52:52.583049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.636 [2024-07-15 23:52:52.583063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.636 [2024-07-15 23:52:52.583070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.636 [2024-07-15 23:52:52.583237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.636 [2024-07-15 23:52:52.583399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.636 [2024-07-15 23:52:52.583407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.636 [2024-07-15 23:52:52.583413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.636 [2024-07-15 23:52:52.586101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.636 [2024-07-15 23:52:52.595481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.636 [2024-07-15 23:52:52.595892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.636 [2024-07-15 23:52:52.595934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.636 [2024-07-15 23:52:52.595955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.636 [2024-07-15 23:52:52.596529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.636 [2024-07-15 23:52:52.596700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.636 [2024-07-15 23:52:52.596711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.636 [2024-07-15 23:52:52.596717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.636 [2024-07-15 23:52:52.599364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.608630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.609109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.609154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.609176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.609584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.897 [2024-07-15 23:52:52.609757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.897 [2024-07-15 23:52:52.609764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.897 [2024-07-15 23:52:52.609771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.897 [2024-07-15 23:52:52.612597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.621447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.621925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.621970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.621994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.622573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.897 [2024-07-15 23:52:52.622745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.897 [2024-07-15 23:52:52.622753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.897 [2024-07-15 23:52:52.622759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.897 [2024-07-15 23:52:52.625402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.634319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.634696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.634711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.634718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.634879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.897 [2024-07-15 23:52:52.635041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.897 [2024-07-15 23:52:52.635048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.897 [2024-07-15 23:52:52.635053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.897 [2024-07-15 23:52:52.637744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.647174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.647676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.647693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.647699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.647870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.897 [2024-07-15 23:52:52.648041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.897 [2024-07-15 23:52:52.648049] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.897 [2024-07-15 23:52:52.648055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.897 [2024-07-15 23:52:52.650805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.660038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.660490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.660533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.660554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.660991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.897 [2024-07-15 23:52:52.661163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.897 [2024-07-15 23:52:52.661170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.897 [2024-07-15 23:52:52.661176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.897 [2024-07-15 23:52:52.663852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.673052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.673515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.673557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.673578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.674028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.897 [2024-07-15 23:52:52.674199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.897 [2024-07-15 23:52:52.674207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.897 [2024-07-15 23:52:52.674213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.897 [2024-07-15 23:52:52.676915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.685926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.686414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.686457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.686479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.687003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.897 [2024-07-15 23:52:52.687165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.897 [2024-07-15 23:52:52.687172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.897 [2024-07-15 23:52:52.687177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.897 [2024-07-15 23:52:52.689914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.698869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.699320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.699336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.699342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.699505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.897 [2024-07-15 23:52:52.699666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.897 [2024-07-15 23:52:52.699673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.897 [2024-07-15 23:52:52.699678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.897 [2024-07-15 23:52:52.702346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.711788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.712162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.712177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.712183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.712372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.897 [2024-07-15 23:52:52.712544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.897 [2024-07-15 23:52:52.712552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.897 [2024-07-15 23:52:52.712558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.897 [2024-07-15 23:52:52.715211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.724724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.725182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.725236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.725259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.725836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.897 [2024-07-15 23:52:52.726166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.897 [2024-07-15 23:52:52.726173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.897 [2024-07-15 23:52:52.726183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.897 [2024-07-15 23:52:52.728907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.897 [2024-07-15 23:52:52.737570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.897 [2024-07-15 23:52:52.738015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.897 [2024-07-15 23:52:52.738030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.897 [2024-07-15 23:52:52.738037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.897 [2024-07-15 23:52:52.738199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.898 [2024-07-15 23:52:52.738390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.898 [2024-07-15 23:52:52.738398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.898 [2024-07-15 23:52:52.738404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.898 [2024-07-15 23:52:52.741060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.898 [2024-07-15 23:52:52.750483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.898 [2024-07-15 23:52:52.750935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.898 [2024-07-15 23:52:52.750950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.898 [2024-07-15 23:52:52.750956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.898 [2024-07-15 23:52:52.751118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.898 [2024-07-15 23:52:52.751304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.898 [2024-07-15 23:52:52.751312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.898 [2024-07-15 23:52:52.751318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.898 [2024-07-15 23:52:52.754035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.898 [2024-07-15 23:52:52.763306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.898 [2024-07-15 23:52:52.763756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.898 [2024-07-15 23:52:52.763771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:03.898 [2024-07-15 23:52:52.763778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:03.898 [2024-07-15 23:52:52.763940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:03.898 [2024-07-15 23:52:52.764102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.898 [2024-07-15 23:52:52.764109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.898 [2024-07-15 23:52:52.764114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.898 [2024-07-15 23:52:52.766799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.898 [2024-07-15 23:52:52.776262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.898 [2024-07-15 23:52:52.776722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.898 [2024-07-15 23:52:52.776741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.898 [2024-07-15 23:52:52.776748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.898 [2024-07-15 23:52:52.776919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.898 [2024-07-15 23:52:52.777089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.898 [2024-07-15 23:52:52.777097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.898 [2024-07-15 23:52:52.777103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.898 [2024-07-15 23:52:52.779783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.898 [2024-07-15 23:52:52.789078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.898 [2024-07-15 23:52:52.789529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.898 [2024-07-15 23:52:52.789544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.898 [2024-07-15 23:52:52.789551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.898 [2024-07-15 23:52:52.789712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.898 [2024-07-15 23:52:52.789874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.898 [2024-07-15 23:52:52.789881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.898 [2024-07-15 23:52:52.789887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.898 [2024-07-15 23:52:52.792551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.898 [2024-07-15 23:52:52.801871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.898 [2024-07-15 23:52:52.802351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.898 [2024-07-15 23:52:52.802394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.898 [2024-07-15 23:52:52.802416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.898 [2024-07-15 23:52:52.802948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.898 [2024-07-15 23:52:52.803110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.898 [2024-07-15 23:52:52.803117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.898 [2024-07-15 23:52:52.803123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.898 [2024-07-15 23:52:52.805800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.898 [2024-07-15 23:52:52.814666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.898 [2024-07-15 23:52:52.815046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.898 [2024-07-15 23:52:52.815088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.898 [2024-07-15 23:52:52.815109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.898 [2024-07-15 23:52:52.815701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.898 [2024-07-15 23:52:52.816222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.898 [2024-07-15 23:52:52.816232] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.898 [2024-07-15 23:52:52.816238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.898 [2024-07-15 23:52:52.818852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.898 [2024-07-15 23:52:52.827592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.898 [2024-07-15 23:52:52.828022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.898 [2024-07-15 23:52:52.828037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.898 [2024-07-15 23:52:52.828043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.898 [2024-07-15 23:52:52.828205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.898 [2024-07-15 23:52:52.828398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.898 [2024-07-15 23:52:52.828406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.898 [2024-07-15 23:52:52.828412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.898 [2024-07-15 23:52:52.831260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.898 [2024-07-15 23:52:52.840544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.898 [2024-07-15 23:52:52.841024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.898 [2024-07-15 23:52:52.841039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.898 [2024-07-15 23:52:52.841046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.898 [2024-07-15 23:52:52.841217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.898 [2024-07-15 23:52:52.841396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.898 [2024-07-15 23:52:52.841404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.898 [2024-07-15 23:52:52.841410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.898 [2024-07-15 23:52:52.844018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.898 [2024-07-15 23:52:52.853495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.898 [2024-07-15 23:52:52.853964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.898 [2024-07-15 23:52:52.853979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.898 [2024-07-15 23:52:52.853986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.898 [2024-07-15 23:52:52.854148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.898 [2024-07-15 23:52:52.854318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.898 [2024-07-15 23:52:52.854325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.898 [2024-07-15 23:52:52.854331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.898 [2024-07-15 23:52:52.856979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.898 [2024-07-15 23:52:52.866568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.898 [2024-07-15 23:52:52.867007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.898 [2024-07-15 23:52:52.867025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:03.898 [2024-07-15 23:52:52.867033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:03.898 [2024-07-15 23:52:52.867211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:03.898 [2024-07-15 23:52:52.867394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.898 [2024-07-15 23:52:52.867402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.898 [2024-07-15 23:52:52.867409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.158 [2024-07-15 23:52:52.870289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.158 [2024-07-15 23:52:52.879600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.158 [2024-07-15 23:52:52.880081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.158 [2024-07-15 23:52:52.880097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.158 [2024-07-15 23:52:52.880104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.158 [2024-07-15 23:52:52.880273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.158 [2024-07-15 23:52:52.880436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.158 [2024-07-15 23:52:52.880443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.158 [2024-07-15 23:52:52.880449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.158 [2024-07-15 23:52:52.883200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.158 [2024-07-15 23:52:52.892528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.158 [2024-07-15 23:52:52.893017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.158 [2024-07-15 23:52:52.893061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.158 [2024-07-15 23:52:52.893082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.158 [2024-07-15 23:52:52.893531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.158 [2024-07-15 23:52:52.893694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.158 [2024-07-15 23:52:52.893701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.158 [2024-07-15 23:52:52.893706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.158 [2024-07-15 23:52:52.896388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.158 [2024-07-15 23:52:52.905545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.158 [2024-07-15 23:52:52.906035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.158 [2024-07-15 23:52:52.906077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.158 [2024-07-15 23:52:52.906107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.158 [2024-07-15 23:52:52.906641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.158 [2024-07-15 23:52:52.906814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.158 [2024-07-15 23:52:52.906821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.158 [2024-07-15 23:52:52.906827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.158 [2024-07-15 23:52:52.909558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.158 [2024-07-15 23:52:52.918462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.158 [2024-07-15 23:52:52.918935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.158 [2024-07-15 23:52:52.918949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.158 [2024-07-15 23:52:52.918956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.158 [2024-07-15 23:52:52.919117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.158 [2024-07-15 23:52:52.919283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.158 [2024-07-15 23:52:52.919290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.158 [2024-07-15 23:52:52.919296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.158 [2024-07-15 23:52:52.921995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.158 [2024-07-15 23:52:52.931504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.158 [2024-07-15 23:52:52.931968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.158 [2024-07-15 23:52:52.932010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.158 [2024-07-15 23:52:52.932031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.158 [2024-07-15 23:52:52.932549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.158 [2024-07-15 23:52:52.932712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.158 [2024-07-15 23:52:52.932719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.158 [2024-07-15 23:52:52.932724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.158 [2024-07-15 23:52:52.935407] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.158 [2024-07-15 23:52:52.944527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.158 [2024-07-15 23:52:52.944974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.158 [2024-07-15 23:52:52.944989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.158 [2024-07-15 23:52:52.944996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.158 [2024-07-15 23:52:52.945157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.158 [2024-07-15 23:52:52.945324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.158 [2024-07-15 23:52:52.945335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.158 [2024-07-15 23:52:52.945340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.158 [2024-07-15 23:52:52.948046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.158 [2024-07-15 23:52:52.957353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.158 [2024-07-15 23:52:52.957815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.158 [2024-07-15 23:52:52.957857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.158 [2024-07-15 23:52:52.957878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.159 [2024-07-15 23:52:52.958469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.159 [2024-07-15 23:52:52.958910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.159 [2024-07-15 23:52:52.958918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.159 [2024-07-15 23:52:52.958924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.159 [2024-07-15 23:52:52.961647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.159 [2024-07-15 23:52:52.970153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.159 [2024-07-15 23:52:52.970615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.159 [2024-07-15 23:52:52.970658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.159 [2024-07-15 23:52:52.970680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.159 [2024-07-15 23:52:52.971271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.159 [2024-07-15 23:52:52.971499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.159 [2024-07-15 23:52:52.971506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.159 [2024-07-15 23:52:52.971512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.159 [2024-07-15 23:52:52.974222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.159 [2024-07-15 23:52:52.983175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.159 [2024-07-15 23:52:52.983626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.159 [2024-07-15 23:52:52.983641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.159 [2024-07-15 23:52:52.983648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.159 [2024-07-15 23:52:52.983810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.159 [2024-07-15 23:52:52.983973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.159 [2024-07-15 23:52:52.983980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.159 [2024-07-15 23:52:52.983986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.159 [2024-07-15 23:52:52.986680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.159 [2024-07-15 23:52:52.996107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.159 [2024-07-15 23:52:52.996560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.159 [2024-07-15 23:52:52.996576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.159 [2024-07-15 23:52:52.996582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.159 [2024-07-15 23:52:52.996744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.159 [2024-07-15 23:52:52.996906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.159 [2024-07-15 23:52:52.996913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.159 [2024-07-15 23:52:52.996919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.159 [2024-07-15 23:52:52.999610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.159 [2024-07-15 23:52:53.009027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.159 [2024-07-15 23:52:53.009491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.159 [2024-07-15 23:52:53.009533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.159 [2024-07-15 23:52:53.009555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.159 [2024-07-15 23:52:53.010132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.159 [2024-07-15 23:52:53.010733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.159 [2024-07-15 23:52:53.010740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.159 [2024-07-15 23:52:53.010747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.159 [2024-07-15 23:52:53.013547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.159 [2024-07-15 23:52:53.021905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.159 [2024-07-15 23:52:53.022368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.159 [2024-07-15 23:52:53.022413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.159 [2024-07-15 23:52:53.022436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.159 [2024-07-15 23:52:53.022663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.159 [2024-07-15 23:52:53.022835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.159 [2024-07-15 23:52:53.022842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.159 [2024-07-15 23:52:53.022849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.159 [2024-07-15 23:52:53.025541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.159 [2024-07-15 23:52:53.034809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.159 [2024-07-15 23:52:53.035270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.159 [2024-07-15 23:52:53.035313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.159 [2024-07-15 23:52:53.035334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.159 [2024-07-15 23:52:53.035862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.159 [2024-07-15 23:52:53.036024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.159 [2024-07-15 23:52:53.036031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.159 [2024-07-15 23:52:53.036037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.159 [2024-07-15 23:52:53.039918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.159 [2024-07-15 23:52:53.048273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.159 [2024-07-15 23:52:53.048742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.159 [2024-07-15 23:52:53.048785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.159 [2024-07-15 23:52:53.048807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.159 [2024-07-15 23:52:53.049400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.159 [2024-07-15 23:52:53.049906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.159 [2024-07-15 23:52:53.049913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.159 [2024-07-15 23:52:53.049919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.159 [2024-07-15 23:52:53.052609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.159 [2024-07-15 23:52:53.061171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.159 [2024-07-15 23:52:53.061635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.159 [2024-07-15 23:52:53.061678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.159 [2024-07-15 23:52:53.061699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.159 [2024-07-15 23:52:53.062292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.159 [2024-07-15 23:52:53.062871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.159 [2024-07-15 23:52:53.062878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.159 [2024-07-15 23:52:53.062885] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.159 [2024-07-15 23:52:53.065614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.159 [2024-07-15 23:52:53.074062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.159 [2024-07-15 23:52:53.074522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.159 [2024-07-15 23:52:53.074565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.159 [2024-07-15 23:52:53.074588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.159 [2024-07-15 23:52:53.075004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.159 [2024-07-15 23:52:53.075175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.159 [2024-07-15 23:52:53.075183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.159 [2024-07-15 23:52:53.075192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.159 [2024-07-15 23:52:53.077905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.159 [2024-07-15 23:52:53.087214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.159 [2024-07-15 23:52:53.087710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.159 [2024-07-15 23:52:53.087752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.159 [2024-07-15 23:52:53.087774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.159 [2024-07-15 23:52:53.088214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.159 [2024-07-15 23:52:53.088395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.159 [2024-07-15 23:52:53.088403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.159 [2024-07-15 23:52:53.088409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.159 [2024-07-15 23:52:53.091093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.159 [2024-07-15 23:52:53.100059] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.159 [2024-07-15 23:52:53.100551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.159 [2024-07-15 23:52:53.100566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.160 [2024-07-15 23:52:53.100573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.160 [2024-07-15 23:52:53.100734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.160 [2024-07-15 23:52:53.100897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.160 [2024-07-15 23:52:53.100903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.160 [2024-07-15 23:52:53.100909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.160 [2024-07-15 23:52:53.103499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.160 [2024-07-15 23:52:53.112930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.160 [2024-07-15 23:52:53.113378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.160 [2024-07-15 23:52:53.113394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.160 [2024-07-15 23:52:53.113400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.160 [2024-07-15 23:52:53.113577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.160 [2024-07-15 23:52:53.113739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.160 [2024-07-15 23:52:53.113746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.160 [2024-07-15 23:52:53.113752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.160 [2024-07-15 23:52:53.116441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.160 [2024-07-15 23:52:53.125846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.160 [2024-07-15 23:52:53.126314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.160 [2024-07-15 23:52:53.126335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.160 [2024-07-15 23:52:53.126343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.160 [2024-07-15 23:52:53.126522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.160 [2024-07-15 23:52:53.126698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.160 [2024-07-15 23:52:53.126706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.160 [2024-07-15 23:52:53.126712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.160 [2024-07-15 23:52:53.129582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.419 [2024-07-15 23:52:53.138873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.419 [2024-07-15 23:52:53.139310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.419 [2024-07-15 23:52:53.139328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.419 [2024-07-15 23:52:53.139335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.419 [2024-07-15 23:52:53.139506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.419 [2024-07-15 23:52:53.139679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.419 [2024-07-15 23:52:53.139686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.419 [2024-07-15 23:52:53.139692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.419 [2024-07-15 23:52:53.142449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.419 [2024-07-15 23:52:53.151725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.419 [2024-07-15 23:52:53.152183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.419 [2024-07-15 23:52:53.152239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.419 [2024-07-15 23:52:53.152262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.419 [2024-07-15 23:52:53.152840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.419 [2024-07-15 23:52:53.153218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.419 [2024-07-15 23:52:53.153230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.419 [2024-07-15 23:52:53.153236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.419 [2024-07-15 23:52:53.155851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.419 [2024-07-15 23:52:53.164655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.419 [2024-07-15 23:52:53.165097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.419 [2024-07-15 23:52:53.165112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.419 [2024-07-15 23:52:53.165119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.165311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.165483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.165491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.165497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.168151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.177536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.178001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.178044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.420 [2024-07-15 23:52:53.178064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.178654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.179183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.179190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.179196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.181911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.190368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.190819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.190835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.420 [2024-07-15 23:52:53.190841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.191002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.191164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.191171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.191176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.193915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.203281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.203735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.203750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.420 [2024-07-15 23:52:53.203756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.203918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.204079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.204086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.204095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.206777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.216101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.216596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.216613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.420 [2024-07-15 23:52:53.216620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.216790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.216961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.216969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.216974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.219658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.229014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.229470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.229512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.420 [2024-07-15 23:52:53.229533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.230118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.230303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.230311] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.230317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.232982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.241949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.242314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.242329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.420 [2024-07-15 23:52:53.242335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.242496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.242658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.242665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.242671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.245361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.254881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.255333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.255352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.420 [2024-07-15 23:52:53.255359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.255520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.255683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.255690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.255696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.258364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.267757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.268217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.268272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.420 [2024-07-15 23:52:53.268293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.268871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.269391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.269398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.269405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.272062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.280590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.281056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.281098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.420 [2024-07-15 23:52:53.281119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.281615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.281792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.281799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.281806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.284487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.293428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.293879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.293894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.420 [2024-07-15 23:52:53.293901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.420 [2024-07-15 23:52:53.294071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.420 [2024-07-15 23:52:53.294254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.420 [2024-07-15 23:52:53.294263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.420 [2024-07-15 23:52:53.294269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.420 [2024-07-15 23:52:53.296885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.420 [2024-07-15 23:52:53.306345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.420 [2024-07-15 23:52:53.306828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.420 [2024-07-15 23:52:53.306843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.421 [2024-07-15 23:52:53.306850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.421 [2024-07-15 23:52:53.307011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.421 [2024-07-15 23:52:53.307173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.421 [2024-07-15 23:52:53.307180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.421 [2024-07-15 23:52:53.307186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.421 [2024-07-15 23:52:53.309926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.421 [2024-07-15 23:52:53.319353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.421 [2024-07-15 23:52:53.319796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.421 [2024-07-15 23:52:53.319812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.421 [2024-07-15 23:52:53.319819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.421 [2024-07-15 23:52:53.319981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.421 [2024-07-15 23:52:53.320143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.421 [2024-07-15 23:52:53.320151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.421 [2024-07-15 23:52:53.320156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.421 [2024-07-15 23:52:53.322847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.421 [2024-07-15 23:52:53.332431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.421 [2024-07-15 23:52:53.332811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.421 [2024-07-15 23:52:53.332827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.421 [2024-07-15 23:52:53.332833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.421 [2024-07-15 23:52:53.332994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.421 [2024-07-15 23:52:53.333156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.421 [2024-07-15 23:52:53.333164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.421 [2024-07-15 23:52:53.333170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.421 [2024-07-15 23:52:53.336009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.421 [2024-07-15 23:52:53.345516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.421 [2024-07-15 23:52:53.346060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.421 [2024-07-15 23:52:53.346103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.421 [2024-07-15 23:52:53.346124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.421 [2024-07-15 23:52:53.346573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.421 [2024-07-15 23:52:53.346746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.421 [2024-07-15 23:52:53.346754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.421 [2024-07-15 23:52:53.346760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.421 [2024-07-15 23:52:53.349493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.421 [2024-07-15 23:52:53.358574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.421 [2024-07-15 23:52:53.359071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.421 [2024-07-15 23:52:53.359088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.421 [2024-07-15 23:52:53.359095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.421 [2024-07-15 23:52:53.359279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.421 [2024-07-15 23:52:53.359457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.421 [2024-07-15 23:52:53.359464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.421 [2024-07-15 23:52:53.359470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.421 [2024-07-15 23:52:53.362303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.421 [2024-07-15 23:52:53.371413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.421 [2024-07-15 23:52:53.371910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.421 [2024-07-15 23:52:53.371925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.421 [2024-07-15 23:52:53.371932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.421 [2024-07-15 23:52:53.372102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.421 [2024-07-15 23:52:53.372279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.421 [2024-07-15 23:52:53.372287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.421 [2024-07-15 23:52:53.372293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.421 [2024-07-15 23:52:53.375026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.421 [2024-07-15 23:52:53.384435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.421 [2024-07-15 23:52:53.384892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.421 [2024-07-15 23:52:53.384935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.421 [2024-07-15 23:52:53.384963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.421 [2024-07-15 23:52:53.385489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.421 [2024-07-15 23:52:53.385651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.421 [2024-07-15 23:52:53.385658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.421 [2024-07-15 23:52:53.385664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.421 [2024-07-15 23:52:53.388425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.681 [2024-07-15 23:52:53.397453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.681 [2024-07-15 23:52:53.397809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.681 [2024-07-15 23:52:53.397826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.681 [2024-07-15 23:52:53.397834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.681 [2024-07-15 23:52:53.398012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.681 [2024-07-15 23:52:53.398189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.681 [2024-07-15 23:52:53.398197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.681 [2024-07-15 23:52:53.398204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.681 [2024-07-15 23:52:53.400981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.681 [2024-07-15 23:52:53.410295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.681 [2024-07-15 23:52:53.410747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.681 [2024-07-15 23:52:53.410790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.681 [2024-07-15 23:52:53.410812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.681 [2024-07-15 23:52:53.411405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.681 [2024-07-15 23:52:53.411904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.681 [2024-07-15 23:52:53.411911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.681 [2024-07-15 23:52:53.411917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.681 [2024-07-15 23:52:53.414547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.681 [2024-07-15 23:52:53.423250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.681 [2024-07-15 23:52:53.423583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.681 [2024-07-15 23:52:53.423599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.681 [2024-07-15 23:52:53.423606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.681 [2024-07-15 23:52:53.423777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.681 [2024-07-15 23:52:53.423949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.681 [2024-07-15 23:52:53.423960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.681 [2024-07-15 23:52:53.423966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.681 [2024-07-15 23:52:53.426714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.681 [2024-07-15 23:52:53.436321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.681 [2024-07-15 23:52:53.436714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.681 [2024-07-15 23:52:53.436729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.681 [2024-07-15 23:52:53.436735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.681 [2024-07-15 23:52:53.436897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.681 [2024-07-15 23:52:53.437059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.681 [2024-07-15 23:52:53.437066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.681 [2024-07-15 23:52:53.437072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.681 [2024-07-15 23:52:53.439849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.681 [2024-07-15 23:52:53.449387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.681 [2024-07-15 23:52:53.449780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.681 [2024-07-15 23:52:53.449796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.681 [2024-07-15 23:52:53.449803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.681 [2024-07-15 23:52:53.449979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.681 [2024-07-15 23:52:53.450155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.681 [2024-07-15 23:52:53.450162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.681 [2024-07-15 23:52:53.450169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.681 [2024-07-15 23:52:53.453005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.681 [2024-07-15 23:52:53.462620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.681 [2024-07-15 23:52:53.463094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.681 [2024-07-15 23:52:53.463111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.681 [2024-07-15 23:52:53.463118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.463318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.463512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.463521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.463528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.466483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.475862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.476282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.476298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.476306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.476487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.476670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.476678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.476684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.479599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.489241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.489744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.489761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.489768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.489962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.490156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.490164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.490171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.493287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.502646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.503028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.503045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.503053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.503251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.503446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.503454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.503461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.506566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.516098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.516618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.516636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.516644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.516840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.517034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.517042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.517048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.520159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.529259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.529752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.529768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.529775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.529956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.530138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.530146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.530152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.533026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.542528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.543018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.543034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.543041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.543222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.543410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.543417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.543424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.546341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.555965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.556480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.556497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.556505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.556698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.556892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.556900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.556910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.560019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.569380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.569882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.569899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.569907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.570100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.570300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.570309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.570316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.573396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.582879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.583260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.583276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.583284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.583466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.583648] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.583656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.583662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.586720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.596016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.596418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.596434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.596441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.596622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.596805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.596812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.596819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.599738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.609417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.609923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.609939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.609947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.610140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.610340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.610350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.610357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.682 [2024-07-15 23:52:53.613464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.682 [2024-07-15 23:52:53.622819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.682 [2024-07-15 23:52:53.623326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.682 [2024-07-15 23:52:53.623343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.682 [2024-07-15 23:52:53.623351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.682 [2024-07-15 23:52:53.623544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.682 [2024-07-15 23:52:53.623739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.682 [2024-07-15 23:52:53.623747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.682 [2024-07-15 23:52:53.623754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.683 [2024-07-15 23:52:53.626863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.683 [2024-07-15 23:52:53.636142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.683 [2024-07-15 23:52:53.636657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.683 [2024-07-15 23:52:53.636674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.683 [2024-07-15 23:52:53.636682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.683 [2024-07-15 23:52:53.636876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.683 [2024-07-15 23:52:53.637070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.683 [2024-07-15 23:52:53.637078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.683 [2024-07-15 23:52:53.637084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.683 [2024-07-15 23:52:53.640194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.683 [2024-07-15 23:52:53.649369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.683 [2024-07-15 23:52:53.649863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.683 [2024-07-15 23:52:53.649880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.683 [2024-07-15 23:52:53.649888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.683 [2024-07-15 23:52:53.650082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.683 [2024-07-15 23:52:53.650286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.683 [2024-07-15 23:52:53.650295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.683 [2024-07-15 23:52:53.650302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.942 [2024-07-15 23:52:53.653452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.942 [2024-07-15 23:52:53.662566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.942 [2024-07-15 23:52:53.663070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-07-15 23:52:53.663087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.942 [2024-07-15 23:52:53.663094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.942 [2024-07-15 23:52:53.663283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.942 [2024-07-15 23:52:53.663465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.942 [2024-07-15 23:52:53.663473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.942 [2024-07-15 23:52:53.663480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.942 [2024-07-15 23:52:53.666402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.942 [2024-07-15 23:52:53.675865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.942 [2024-07-15 23:52:53.676280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.942 [2024-07-15 23:52:53.676297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:04.942 [2024-07-15 23:52:53.676305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:04.942 [2024-07-15 23:52:53.676486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:04.942 [2024-07-15 23:52:53.676668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.942 [2024-07-15 23:52:53.676676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.942 [2024-07-15 23:52:53.676683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.942 [2024-07-15 23:52:53.679543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.942 [2024-07-15 23:52:53.688900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.689401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.689444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.689465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.690043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.690310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.690318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.690324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.693150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.701857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.702314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.702329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.702336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.702512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.702674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.702681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.702687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.705361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.714656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.715060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.715075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.715082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.715249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.715437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.715445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.715451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.718106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.727622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.728071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.728086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.728093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.728276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.728449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.728456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.728462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.731140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.740420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.740892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.740910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.740916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.741078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.741246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.741269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.741275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.743944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.753217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.753684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.753727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.753748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.754337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.754766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.754774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.754780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.757423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.766015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.766360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.766376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.766382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.766543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.766705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.766712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.766718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.769406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.778927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.779413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.779456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.779477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.779887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.780052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.780059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.780065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.782816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.791810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.792497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.792545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.792567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.792832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.793008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.793016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.793022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.795738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.804648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.805137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.805180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.805202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.805691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.805863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.805871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.805877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.808511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.817473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.817955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.942 [2024-07-15 23:52:53.817996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.942 [2024-07-15 23:52:53.818017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.942 [2024-07-15 23:52:53.818609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.942 [2024-07-15 23:52:53.818935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.942 [2024-07-15 23:52:53.818942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.942 [2024-07-15 23:52:53.818949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.942 [2024-07-15 23:52:53.821581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.942 [2024-07-15 23:52:53.830320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.942 [2024-07-15 23:52:53.830660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.943 [2024-07-15 23:52:53.830676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.943 [2024-07-15 23:52:53.830682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.943 [2024-07-15 23:52:53.830844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.943 [2024-07-15 23:52:53.831005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.943 [2024-07-15 23:52:53.831012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.943 [2024-07-15 23:52:53.831018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.943 [2024-07-15 23:52:53.833700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.943 [2024-07-15 23:52:53.843513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.943 [2024-07-15 23:52:53.844002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.943 [2024-07-15 23:52:53.844044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.943 [2024-07-15 23:52:53.844064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.943 [2024-07-15 23:52:53.844656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.943 [2024-07-15 23:52:53.845047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.943 [2024-07-15 23:52:53.845057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.943 [2024-07-15 23:52:53.845066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.943 [2024-07-15 23:52:53.849123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.943 [2024-07-15 23:52:53.856925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.943 [2024-07-15 23:52:53.857369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.943 [2024-07-15 23:52:53.857411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.943 [2024-07-15 23:52:53.857432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.943 [2024-07-15 23:52:53.857885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.943 [2024-07-15 23:52:53.858051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.943 [2024-07-15 23:52:53.858058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.943 [2024-07-15 23:52:53.858064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.943 [2024-07-15 23:52:53.860798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.943 [2024-07-15 23:52:53.869840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.943 [2024-07-15 23:52:53.870314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.943 [2024-07-15 23:52:53.870329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.943 [2024-07-15 23:52:53.870338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.943 [2024-07-15 23:52:53.870499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.943 [2024-07-15 23:52:53.870661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.943 [2024-07-15 23:52:53.870668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.943 [2024-07-15 23:52:53.870674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.943 [2024-07-15 23:52:53.873360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.943 [2024-07-15 23:52:53.882626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.943 [2024-07-15 23:52:53.883134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.943 [2024-07-15 23:52:53.883175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.943 [2024-07-15 23:52:53.883196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.943 [2024-07-15 23:52:53.883714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.943 [2024-07-15 23:52:53.883885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.943 [2024-07-15 23:52:53.883893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.943 [2024-07-15 23:52:53.883899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.943 [2024-07-15 23:52:53.886539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.943 [2024-07-15 23:52:53.895493] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.943 [2024-07-15 23:52:53.895936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.943 [2024-07-15 23:52:53.895950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.943 [2024-07-15 23:52:53.895957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.943 [2024-07-15 23:52:53.896118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.943 [2024-07-15 23:52:53.896304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.943 [2024-07-15 23:52:53.896312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.943 [2024-07-15 23:52:53.896318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.943 [2024-07-15 23:52:53.898979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.943 [2024-07-15 23:52:53.908435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:04.943 [2024-07-15 23:52:53.908861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.943 [2024-07-15 23:52:53.908877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:04.943 [2024-07-15 23:52:53.908883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:04.943 [2024-07-15 23:52:53.909045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:04.943 [2024-07-15 23:52:53.909207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:04.943 [2024-07-15 23:52:53.909216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:04.943 [2024-07-15 23:52:53.909222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:04.943 [2024-07-15 23:52:53.912120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.202 [2024-07-15 23:52:53.921416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.202 [2024-07-15 23:52:53.921834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.202 [2024-07-15 23:52:53.921881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.202 [2024-07-15 23:52:53.921904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.202 [2024-07-15 23:52:53.922497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.202 [2024-07-15 23:52:53.923047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.202 [2024-07-15 23:52:53.923055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.202 [2024-07-15 23:52:53.923062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.202 [2024-07-15 23:52:53.925753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.202 [2024-07-15 23:52:53.934396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.202 [2024-07-15 23:52:53.934819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.202 [2024-07-15 23:52:53.934834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.202 [2024-07-15 23:52:53.934841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.202 [2024-07-15 23:52:53.935002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.202 [2024-07-15 23:52:53.935164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.202 [2024-07-15 23:52:53.935172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.202 [2024-07-15 23:52:53.935178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.202 [2024-07-15 23:52:53.937919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.202 [2024-07-15 23:52:53.947339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.202 [2024-07-15 23:52:53.947823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.202 [2024-07-15 23:52:53.947865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.202 [2024-07-15 23:52:53.947886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.202 [2024-07-15 23:52:53.948478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.202 [2024-07-15 23:52:53.949052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.202 [2024-07-15 23:52:53.949059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.202 [2024-07-15 23:52:53.949065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.202 [2024-07-15 23:52:53.951751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.202 [2024-07-15 23:52:53.960202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.202 [2024-07-15 23:52:53.960658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.202 [2024-07-15 23:52:53.960699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.202 [2024-07-15 23:52:53.960720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.202 [2024-07-15 23:52:53.961176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.202 [2024-07-15 23:52:53.961353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.202 [2024-07-15 23:52:53.961361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.202 [2024-07-15 23:52:53.961367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.202 [2024-07-15 23:52:53.964028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.202 [2024-07-15 23:52:53.973078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.202 [2024-07-15 23:52:53.973462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.202 [2024-07-15 23:52:53.973478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.202 [2024-07-15 23:52:53.973485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.202 [2024-07-15 23:52:53.973656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.202 [2024-07-15 23:52:53.973826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.202 [2024-07-15 23:52:53.973834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.203 [2024-07-15 23:52:53.973840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.203 [2024-07-15 23:52:53.976587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.203 [2024-07-15 23:52:53.985984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.203 [2024-07-15 23:52:53.986432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.203 [2024-07-15 23:52:53.986448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.203 [2024-07-15 23:52:53.986455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.203 [2024-07-15 23:52:53.986626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.203 [2024-07-15 23:52:53.986797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.203 [2024-07-15 23:52:53.986804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.203 [2024-07-15 23:52:53.986810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.203 [2024-07-15 23:52:53.989503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.203 [2024-07-15 23:52:53.998920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.203 [2024-07-15 23:52:53.999400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.203 [2024-07-15 23:52:53.999442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.203 [2024-07-15 23:52:53.999463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.203 [2024-07-15 23:52:54.000040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.203 [2024-07-15 23:52:54.000203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.203 [2024-07-15 23:52:54.000210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.203 [2024-07-15 23:52:54.000215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.203 [2024-07-15 23:52:54.002960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.203 [2024-07-15 23:52:54.011832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.203 [2024-07-15 23:52:54.012453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.203 [2024-07-15 23:52:54.012471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.203 [2024-07-15 23:52:54.012478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.203 [2024-07-15 23:52:54.012641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.203 [2024-07-15 23:52:54.012803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.203 [2024-07-15 23:52:54.012810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.203 [2024-07-15 23:52:54.012816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.203 [2024-07-15 23:52:54.015500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.203 [2024-07-15 23:52:54.024673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.203 [2024-07-15 23:52:54.025142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.203 [2024-07-15 23:52:54.025184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.203 [2024-07-15 23:52:54.025205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.203 [2024-07-15 23:52:54.025800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.203 [2024-07-15 23:52:54.026320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.203 [2024-07-15 23:52:54.026328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.203 [2024-07-15 23:52:54.026334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.203 [2024-07-15 23:52:54.030128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.203 [2024-07-15 23:52:54.038419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.203 [2024-07-15 23:52:54.038883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.203 [2024-07-15 23:52:54.038924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.203 [2024-07-15 23:52:54.038945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.203 [2024-07-15 23:52:54.039371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.203 [2024-07-15 23:52:54.039543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.203 [2024-07-15 23:52:54.039550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.203 [2024-07-15 23:52:54.039560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.203 [2024-07-15 23:52:54.042266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.203 [2024-07-15 23:52:54.051332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.203 [2024-07-15 23:52:54.051722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.203 [2024-07-15 23:52:54.051737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.203 [2024-07-15 23:52:54.051744] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.203 [2024-07-15 23:52:54.051905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.203 [2024-07-15 23:52:54.052067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.203 [2024-07-15 23:52:54.052074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.203 [2024-07-15 23:52:54.052080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.203 [2024-07-15 23:52:54.054769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.203 [2024-07-15 23:52:54.064196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.203 [2024-07-15 23:52:54.064683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.203 [2024-07-15 23:52:54.064725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.203 [2024-07-15 23:52:54.064747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.203 [2024-07-15 23:52:54.065252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.203 [2024-07-15 23:52:54.065424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.203 [2024-07-15 23:52:54.065431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.203 [2024-07-15 23:52:54.065437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.203 [2024-07-15 23:52:54.068150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.203 [2024-07-15 23:52:54.077182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.203 [2024-07-15 23:52:54.077575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.203 [2024-07-15 23:52:54.077590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.203 [2024-07-15 23:52:54.077597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.203 [2024-07-15 23:52:54.077758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.203 [2024-07-15 23:52:54.077920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.203 [2024-07-15 23:52:54.077927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.203 [2024-07-15 23:52:54.077933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.203 [2024-07-15 23:52:54.080617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.203 [2024-07-15 23:52:54.090037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.203 [2024-07-15 23:52:54.090509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.203 [2024-07-15 23:52:54.090552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.203 [2024-07-15 23:52:54.090573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.203 [2024-07-15 23:52:54.091150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.203 [2024-07-15 23:52:54.091386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.203 [2024-07-15 23:52:54.091394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.203 [2024-07-15 23:52:54.091400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.203 [2024-07-15 23:52:54.094301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.203 [2024-07-15 23:52:54.102980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.203 [2024-07-15 23:52:54.103472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.203 [2024-07-15 23:52:54.103516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.203 [2024-07-15 23:52:54.103537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.203 [2024-07-15 23:52:54.104114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.203 [2024-07-15 23:52:54.104325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.203 [2024-07-15 23:52:54.104337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.203 [2024-07-15 23:52:54.104344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.203 [2024-07-15 23:52:54.107062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.203 [2024-07-15 23:52:54.115900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.203 [2024-07-15 23:52:54.116370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.203 [2024-07-15 23:52:54.116386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.203 [2024-07-15 23:52:54.116392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.203 [2024-07-15 23:52:54.116554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.203 [2024-07-15 23:52:54.116716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.203 [2024-07-15 23:52:54.116723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.203 [2024-07-15 23:52:54.116729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.203 [2024-07-15 23:52:54.119461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.203 [2024-07-15 23:52:54.128772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.203 [2024-07-15 23:52:54.129244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.203 [2024-07-15 23:52:54.129259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.203 [2024-07-15 23:52:54.129265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.203 [2024-07-15 23:52:54.129427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.203 [2024-07-15 23:52:54.129593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.203 [2024-07-15 23:52:54.129600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.203 [2024-07-15 23:52:54.129606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.203 [2024-07-15 23:52:54.132291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.203 [2024-07-15 23:52:54.141652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.203 [2024-07-15 23:52:54.142020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.204 [2024-07-15 23:52:54.142035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.204 [2024-07-15 23:52:54.142041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.204 [2024-07-15 23:52:54.142203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.204 [2024-07-15 23:52:54.142393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.204 [2024-07-15 23:52:54.142401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.204 [2024-07-15 23:52:54.142407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.204 [2024-07-15 23:52:54.145064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.204 [2024-07-15 23:52:54.154492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.204 [2024-07-15 23:52:54.154891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.204 [2024-07-15 23:52:54.154906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.204 [2024-07-15 23:52:54.154912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.204 [2024-07-15 23:52:54.155074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.204 [2024-07-15 23:52:54.155241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.204 [2024-07-15 23:52:54.155248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.204 [2024-07-15 23:52:54.155270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.204 [2024-07-15 23:52:54.157938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.204 [2024-07-15 23:52:54.167360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.204 [2024-07-15 23:52:54.167837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.204 [2024-07-15 23:52:54.167878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.204 [2024-07-15 23:52:54.167899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.204 [2024-07-15 23:52:54.168363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.204 [2024-07-15 23:52:54.168534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.204 [2024-07-15 23:52:54.168541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.204 [2024-07-15 23:52:54.168547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.204 [2024-07-15 23:52:54.171347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.463 [2024-07-15 23:52:54.180414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.463 [2024-07-15 23:52:54.180922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.463 [2024-07-15 23:52:54.180967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.463 [2024-07-15 23:52:54.180991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.463 [2024-07-15 23:52:54.181516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.463 [2024-07-15 23:52:54.181688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.463 [2024-07-15 23:52:54.181696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.463 [2024-07-15 23:52:54.181702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.463 [2024-07-15 23:52:54.184425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.463 [2024-07-15 23:52:54.193355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.463 [2024-07-15 23:52:54.193873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.463 [2024-07-15 23:52:54.193917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.463 [2024-07-15 23:52:54.193939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.463 [2024-07-15 23:52:54.194457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.463 [2024-07-15 23:52:54.194629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.463 [2024-07-15 23:52:54.194636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.463 [2024-07-15 23:52:54.194642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.463 [2024-07-15 23:52:54.197290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.463 [2024-07-15 23:52:54.206203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.463 [2024-07-15 23:52:54.206692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.463 [2024-07-15 23:52:54.206736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.463 [2024-07-15 23:52:54.206757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.463 [2024-07-15 23:52:54.207347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.463 [2024-07-15 23:52:54.207837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.463 [2024-07-15 23:52:54.207857] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.463 [2024-07-15 23:52:54.207863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.463 [2024-07-15 23:52:54.211662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.463 [2024-07-15 23:52:54.219996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.463 [2024-07-15 23:52:54.220403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.463 [2024-07-15 23:52:54.220447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.463 [2024-07-15 23:52:54.220476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.463 [2024-07-15 23:52:54.221054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.463 [2024-07-15 23:52:54.221643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.463 [2024-07-15 23:52:54.221668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.463 [2024-07-15 23:52:54.221688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.463 [2024-07-15 23:52:54.224405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.463 [2024-07-15 23:52:54.232868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.463 [2024-07-15 23:52:54.233325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.463 [2024-07-15 23:52:54.233341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.463 [2024-07-15 23:52:54.233347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.463 [2024-07-15 23:52:54.233508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.463 [2024-07-15 23:52:54.233669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.463 [2024-07-15 23:52:54.233676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.463 [2024-07-15 23:52:54.233682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.463 [2024-07-15 23:52:54.236370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.464 [2024-07-15 23:52:54.245708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.464 [2024-07-15 23:52:54.246160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.464 [2024-07-15 23:52:54.246176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.464 [2024-07-15 23:52:54.246182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.464 [2024-07-15 23:52:54.246373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.464 [2024-07-15 23:52:54.246546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.464 [2024-07-15 23:52:54.246553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.464 [2024-07-15 23:52:54.246559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.464 [2024-07-15 23:52:54.249218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.464 [2024-07-15 23:52:54.258707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.464 [2024-07-15 23:52:54.259204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.464 [2024-07-15 23:52:54.259257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.464 [2024-07-15 23:52:54.259279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.464 [2024-07-15 23:52:54.259856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.464 [2024-07-15 23:52:54.260031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.464 [2024-07-15 23:52:54.260042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.464 [2024-07-15 23:52:54.260048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.464 [2024-07-15 23:52:54.262879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.464 [2024-07-15 23:52:54.271522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.464 [2024-07-15 23:52:54.272008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.464 [2024-07-15 23:52:54.272050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.464 [2024-07-15 23:52:54.272071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.464 [2024-07-15 23:52:54.272612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.464 [2024-07-15 23:52:54.272784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.464 [2024-07-15 23:52:54.272792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.464 [2024-07-15 23:52:54.272798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.464 [2024-07-15 23:52:54.275437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.464 [2024-07-15 23:52:54.284313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.464 [2024-07-15 23:52:54.284691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.464 [2024-07-15 23:52:54.284705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.464 [2024-07-15 23:52:54.284711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.464 [2024-07-15 23:52:54.284873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.464 [2024-07-15 23:52:54.285035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.464 [2024-07-15 23:52:54.285042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.464 [2024-07-15 23:52:54.285048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.464 [2024-07-15 23:52:54.287732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.464 [2024-07-15 23:52:54.297160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.464 [2024-07-15 23:52:54.297645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.464 [2024-07-15 23:52:54.297688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.464 [2024-07-15 23:52:54.297709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.464 [2024-07-15 23:52:54.298305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.464 [2024-07-15 23:52:54.298887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.464 [2024-07-15 23:52:54.298910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.464 [2024-07-15 23:52:54.298947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1157696 Killed "${NVMF_APP[@]}" "$@" 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.464 [2024-07-15 23:52:54.302995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1159105 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1159105 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@823 -- # '[' -z 1159105 ']' 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:05.464 [2024-07-15 23:52:54.310710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:05.464 [2024-07-15 23:52:54.311195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.464 [2024-07-15 23:52:54.311211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.464 [2024-07-15 23:52:54.311218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.464 23:52:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.464 [2024-07-15 23:52:54.311399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.464 [2024-07-15 23:52:54.311576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.464 [2024-07-15 23:52:54.311584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.464 [2024-07-15 23:52:54.311590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.464 [2024-07-15 23:52:54.314419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.464 [2024-07-15 23:52:54.323765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.464 [2024-07-15 23:52:54.324233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.464 [2024-07-15 23:52:54.324248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.464 [2024-07-15 23:52:54.324255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.464 [2024-07-15 23:52:54.324431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.464 [2024-07-15 23:52:54.324607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.464 [2024-07-15 23:52:54.324614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.464 [2024-07-15 23:52:54.324620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.464 [2024-07-15 23:52:54.327450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.464 [2024-07-15 23:52:54.336815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.464 [2024-07-15 23:52:54.337303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.464 [2024-07-15 23:52:54.337319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.464 [2024-07-15 23:52:54.337326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.464 [2024-07-15 23:52:54.337502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.464 [2024-07-15 23:52:54.337679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.464 [2024-07-15 23:52:54.337686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.464 [2024-07-15 23:52:54.337693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.464 [2024-07-15 23:52:54.340519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.464 [2024-07-15 23:52:54.349882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.464 [2024-07-15 23:52:54.350369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.464 [2024-07-15 23:52:54.350386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.464 [2024-07-15 23:52:54.350393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.464 [2024-07-15 23:52:54.350577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.464 [2024-07-15 23:52:54.350749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.464 [2024-07-15 23:52:54.350756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.464 [2024-07-15 23:52:54.350762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.464 [2024-07-15 23:52:54.353535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.464 [2024-07-15 23:52:54.359757] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:27:05.464 [2024-07-15 23:52:54.359795] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.464 [2024-07-15 23:52:54.362934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.464 [2024-07-15 23:52:54.363399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.465 [2024-07-15 23:52:54.363415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.465 [2024-07-15 23:52:54.363422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.465 [2024-07-15 23:52:54.363594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.465 [2024-07-15 23:52:54.363765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.465 [2024-07-15 23:52:54.363773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.465 [2024-07-15 23:52:54.363780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.465 [2024-07-15 23:52:54.366586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.465 [2024-07-15 23:52:54.375993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.465 [2024-07-15 23:52:54.376484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.465 [2024-07-15 23:52:54.376499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.465 [2024-07-15 23:52:54.376507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.465 [2024-07-15 23:52:54.376683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.465 [2024-07-15 23:52:54.376859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.465 [2024-07-15 23:52:54.376866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.465 [2024-07-15 23:52:54.376873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.465 [2024-07-15 23:52:54.379822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.465 [2024-07-15 23:52:54.389123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.465 [2024-07-15 23:52:54.389615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.465 [2024-07-15 23:52:54.389633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.465 [2024-07-15 23:52:54.389640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.465 [2024-07-15 23:52:54.389817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.465 [2024-07-15 23:52:54.389994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.465 [2024-07-15 23:52:54.390002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.465 [2024-07-15 23:52:54.390009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.465 [2024-07-15 23:52:54.392844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.465 [2024-07-15 23:52:54.402206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.465 [2024-07-15 23:52:54.402706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.465 [2024-07-15 23:52:54.402722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.465 [2024-07-15 23:52:54.402729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.465 [2024-07-15 23:52:54.402906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.465 [2024-07-15 23:52:54.403082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.465 [2024-07-15 23:52:54.403090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.465 [2024-07-15 23:52:54.403097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.465 [2024-07-15 23:52:54.405926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.465 [2024-07-15 23:52:54.415328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.465 [2024-07-15 23:52:54.415832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.465 [2024-07-15 23:52:54.415848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.465 [2024-07-15 23:52:54.415855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.465 [2024-07-15 23:52:54.416029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.465 [2024-07-15 23:52:54.416202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.465 [2024-07-15 23:52:54.416209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.465 [2024-07-15 23:52:54.416215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.465 [2024-07-15 23:52:54.417178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:05.465 [2024-07-15 23:52:54.419061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.465 [2024-07-15 23:52:54.428457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.465 [2024-07-15 23:52:54.428851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.465 [2024-07-15 23:52:54.428870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.465 [2024-07-15 23:52:54.428877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.465 [2024-07-15 23:52:54.429049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.465 [2024-07-15 23:52:54.429221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.465 [2024-07-15 23:52:54.429233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.465 [2024-07-15 23:52:54.429240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.465 [2024-07-15 23:52:54.432078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.725 [2024-07-15 23:52:54.441684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.725 [2024-07-15 23:52:54.442191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.725 [2024-07-15 23:52:54.442208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.725 [2024-07-15 23:52:54.442216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.725 [2024-07-15 23:52:54.442400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.725 [2024-07-15 23:52:54.442578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.725 [2024-07-15 23:52:54.442586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.725 [2024-07-15 23:52:54.442592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.725 [2024-07-15 23:52:54.445425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.725 [2024-07-15 23:52:54.454796] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.725 [2024-07-15 23:52:54.455283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.725 [2024-07-15 23:52:54.455300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.725 [2024-07-15 23:52:54.455307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.725 [2024-07-15 23:52:54.455480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.725 [2024-07-15 23:52:54.455651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.725 [2024-07-15 23:52:54.455659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.725 [2024-07-15 23:52:54.455669] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.725 [2024-07-15 23:52:54.458468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.725 [2024-07-15 23:52:54.467869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.725 [2024-07-15 23:52:54.468383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.725 [2024-07-15 23:52:54.468403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.725 [2024-07-15 23:52:54.468411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.725 [2024-07-15 23:52:54.468584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.725 [2024-07-15 23:52:54.468756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.725 [2024-07-15 23:52:54.468763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.725 [2024-07-15 23:52:54.468770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.725 [2024-07-15 23:52:54.471579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.725 [2024-07-15 23:52:54.481008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.725 [2024-07-15 23:52:54.481490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.725 [2024-07-15 23:52:54.481507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.725 [2024-07-15 23:52:54.481514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.725 [2024-07-15 23:52:54.481690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.725 [2024-07-15 23:52:54.481867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.725 [2024-07-15 23:52:54.481875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.725 [2024-07-15 23:52:54.481881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.725 [2024-07-15 23:52:54.484709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.725 [2024-07-15 23:52:54.494060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.725 [2024-07-15 23:52:54.494479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.725 [2024-07-15 23:52:54.494496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.725 [2024-07-15 23:52:54.494503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.725 [2024-07-15 23:52:54.494679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.725 [2024-07-15 23:52:54.494856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.725 [2024-07-15 23:52:54.494864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.725 [2024-07-15 23:52:54.494870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.725 [2024-07-15 23:52:54.497701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:05.725 [2024-07-15 23:52:54.500592] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.725 [2024-07-15 23:52:54.500620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.725 [2024-07-15 23:52:54.500628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.725 [2024-07-15 23:52:54.500634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:05.725 [2024-07-15 23:52:54.500639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.725 [2024-07-15 23:52:54.500747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.725 [2024-07-15 23:52:54.500816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.725 [2024-07-15 23:52:54.500870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.725 [2024-07-15 23:52:54.507228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.725 [2024-07-15 23:52:54.507732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.725 [2024-07-15 23:52:54.507750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.725 [2024-07-15 23:52:54.507758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.725 [2024-07-15 23:52:54.507936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.725 [2024-07-15 23:52:54.508113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.725 [2024-07-15 23:52:54.508122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.725 [2024-07-15 23:52:54.508128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.725 [2024-07-15 23:52:54.510963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.725 [2024-07-15 23:52:54.520327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.725 [2024-07-15 23:52:54.520853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.725 [2024-07-15 23:52:54.520873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.725 [2024-07-15 23:52:54.520881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.725 [2024-07-15 23:52:54.521059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.725 [2024-07-15 23:52:54.521241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.725 [2024-07-15 23:52:54.521251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.725 [2024-07-15 23:52:54.521257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.725 [2024-07-15 23:52:54.524082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.725 [2024-07-15 23:52:54.533436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.725 [2024-07-15 23:52:54.533968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.725 [2024-07-15 23:52:54.533988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.725 [2024-07-15 23:52:54.533996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.726 [2024-07-15 23:52:54.534174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.726 [2024-07-15 23:52:54.534356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.726 [2024-07-15 23:52:54.534365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.726 [2024-07-15 23:52:54.534382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.726 [2024-07-15 23:52:54.537204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.726 [2024-07-15 23:52:54.546570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.726 [2024-07-15 23:52:54.547074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.726 [2024-07-15 23:52:54.547094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.726 [2024-07-15 23:52:54.547102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.726 [2024-07-15 23:52:54.547293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.726 [2024-07-15 23:52:54.547471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.726 [2024-07-15 23:52:54.547479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.726 [2024-07-15 23:52:54.547486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.726 [2024-07-15 23:52:54.550311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.726 [2024-07-15 23:52:54.559656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.726 [2024-07-15 23:52:54.560140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.726 [2024-07-15 23:52:54.560159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.726 [2024-07-15 23:52:54.560167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.726 [2024-07-15 23:52:54.560349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.726 [2024-07-15 23:52:54.560527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.726 [2024-07-15 23:52:54.560535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.726 [2024-07-15 23:52:54.560542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.726 [2024-07-15 23:52:54.563367] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.726 [2024-07-15 23:52:54.572710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.726 [2024-07-15 23:52:54.573178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.726 [2024-07-15 23:52:54.573194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.726 [2024-07-15 23:52:54.573201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.726 [2024-07-15 23:52:54.573383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.726 [2024-07-15 23:52:54.573565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.726 [2024-07-15 23:52:54.573573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.726 [2024-07-15 23:52:54.573580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.726 [2024-07-15 23:52:54.576403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.726 [2024-07-15 23:52:54.585756] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:05.726 [2024-07-15 23:52:54.586233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.726 [2024-07-15 23:52:54.586250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:05.726 [2024-07-15 23:52:54.586257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:05.726 [2024-07-15 23:52:54.586435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:05.726 [2024-07-15 23:52:54.586612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:05.726 [2024-07-15 23:52:54.586621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:05.726 [2024-07-15 23:52:54.586627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:05.726 [2024-07-15 23:52:54.589455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:05.726 [2024-07-15 23:52:54.598961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.726 [2024-07-15 23:52:54.599377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.726 [2024-07-15 23:52:54.599393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.726 [2024-07-15 23:52:54.599401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.726 [2024-07-15 23:52:54.599578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.726 [2024-07-15 23:52:54.599754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.726 [2024-07-15 23:52:54.599762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.726 [2024-07-15 23:52:54.599768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.726 [2024-07-15 23:52:54.602594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.726 [2024-07-15 23:52:54.612110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.726 [2024-07-15 23:52:54.612522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.726 [2024-07-15 23:52:54.612539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.726 [2024-07-15 23:52:54.612546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.726 [2024-07-15 23:52:54.612722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.726 [2024-07-15 23:52:54.612900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.726 [2024-07-15 23:52:54.612908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.726 [2024-07-15 23:52:54.612914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.726 [2024-07-15 23:52:54.615746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.726 [2024-07-15 23:52:54.625255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.726 [2024-07-15 23:52:54.625748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.726 [2024-07-15 23:52:54.625764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.726 [2024-07-15 23:52:54.625772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.726 [2024-07-15 23:52:54.625948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.726 [2024-07-15 23:52:54.626129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.726 [2024-07-15 23:52:54.626137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.726 [2024-07-15 23:52:54.626143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.726 [2024-07-15 23:52:54.628973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.726 [2024-07-15 23:52:54.638312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.726 [2024-07-15 23:52:54.638772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.726 [2024-07-15 23:52:54.638788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.726 [2024-07-15 23:52:54.638795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.726 [2024-07-15 23:52:54.638971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.726 [2024-07-15 23:52:54.639149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.726 [2024-07-15 23:52:54.639157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.726 [2024-07-15 23:52:54.639163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.726 [2024-07-15 23:52:54.641990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.726 [2024-07-15 23:52:54.651506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.726 [2024-07-15 23:52:54.651992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.726 [2024-07-15 23:52:54.652007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.726 [2024-07-15 23:52:54.652014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.726 [2024-07-15 23:52:54.652191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.726 [2024-07-15 23:52:54.652371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.726 [2024-07-15 23:52:54.652380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.726 [2024-07-15 23:52:54.652385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.726 [2024-07-15 23:52:54.655210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.726 [2024-07-15 23:52:54.664555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.726 [2024-07-15 23:52:54.665038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.726 [2024-07-15 23:52:54.665054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.726 [2024-07-15 23:52:54.665061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.726 [2024-07-15 23:52:54.665241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.726 [2024-07-15 23:52:54.665421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.726 [2024-07-15 23:52:54.665430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.726 [2024-07-15 23:52:54.665436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.726 [2024-07-15 23:52:54.668271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.726 [2024-07-15 23:52:54.677618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.726 [2024-07-15 23:52:54.678061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.726 [2024-07-15 23:52:54.678077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.727 [2024-07-15 23:52:54.678084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.727 [2024-07-15 23:52:54.678266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.727 [2024-07-15 23:52:54.678444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.727 [2024-07-15 23:52:54.678452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.727 [2024-07-15 23:52:54.678459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.727 [2024-07-15 23:52:54.681286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.727 [2024-07-15 23:52:54.690795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.727 [2024-07-15 23:52:54.691203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.727 [2024-07-15 23:52:54.691220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.727 [2024-07-15 23:52:54.691234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.727 [2024-07-15 23:52:54.691411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.727 [2024-07-15 23:52:54.691588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.727 [2024-07-15 23:52:54.691597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.727 [2024-07-15 23:52:54.691603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.727 [2024-07-15 23:52:54.694469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.986 [2024-07-15 23:52:54.703903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.986 [2024-07-15 23:52:54.704376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.986 [2024-07-15 23:52:54.704394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.986 [2024-07-15 23:52:54.704402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.986 [2024-07-15 23:52:54.704579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.986 [2024-07-15 23:52:54.704755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.986 [2024-07-15 23:52:54.704763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.986 [2024-07-15 23:52:54.704770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.986 [2024-07-15 23:52:54.707601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.986 [2024-07-15 23:52:54.716961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.986 [2024-07-15 23:52:54.717429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.986 [2024-07-15 23:52:54.717450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.986 [2024-07-15 23:52:54.717457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.986 [2024-07-15 23:52:54.717635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.986 [2024-07-15 23:52:54.717812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.986 [2024-07-15 23:52:54.717821] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.986 [2024-07-15 23:52:54.717827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.986 [2024-07-15 23:52:54.720660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.986 [2024-07-15 23:52:54.730026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.986 [2024-07-15 23:52:54.730486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.986 [2024-07-15 23:52:54.730503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.986 [2024-07-15 23:52:54.730510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.986 [2024-07-15 23:52:54.730686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.986 [2024-07-15 23:52:54.730864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.986 [2024-07-15 23:52:54.730872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.986 [2024-07-15 23:52:54.730878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.986 [2024-07-15 23:52:54.733710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.986 [2024-07-15 23:52:54.743233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.986 [2024-07-15 23:52:54.743720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.986 [2024-07-15 23:52:54.743736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.986 [2024-07-15 23:52:54.743743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.986 [2024-07-15 23:52:54.743919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.986 [2024-07-15 23:52:54.744096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.986 [2024-07-15 23:52:54.744104] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.986 [2024-07-15 23:52:54.744110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.986 [2024-07-15 23:52:54.746942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.986 [2024-07-15 23:52:54.756331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.986 [2024-07-15 23:52:54.756674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.986 [2024-07-15 23:52:54.756689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.986 [2024-07-15 23:52:54.756696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.986 [2024-07-15 23:52:54.756873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.986 [2024-07-15 23:52:54.757054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.986 [2024-07-15 23:52:54.757062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.986 [2024-07-15 23:52:54.757069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.759901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.769419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.769811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.769827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.769834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.770011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.770188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.770195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.987 [2024-07-15 23:52:54.770202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.773030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.782559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.783000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.783018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.783025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.783201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.783383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.783391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.987 [2024-07-15 23:52:54.783398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.786227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.795750] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.796238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.796254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.796261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.796438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.796614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.796622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.987 [2024-07-15 23:52:54.796628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.799460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.808822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.809230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.809247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.809254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.809430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.809613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.809620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.987 [2024-07-15 23:52:54.809626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.812456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.821989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.822456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.822473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.822480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.822657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.822833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.822841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.987 [2024-07-15 23:52:54.822847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.825677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.835052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.835385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.835402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.835409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.835586] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.835763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.835772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.987 [2024-07-15 23:52:54.835778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.838607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.848134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.848510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.848526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.848536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.848713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.848889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.848897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.987 [2024-07-15 23:52:54.848904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.851733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.861250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.861668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.861683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.861690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.861867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.862043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.862051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.987 [2024-07-15 23:52:54.862057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.864887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.874406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.874744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.874760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.874766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.874943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.875120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.875128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.987 [2024-07-15 23:52:54.875133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.877961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.887485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.887893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.887909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.887916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.888092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.888275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.888287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.987 [2024-07-15 23:52:54.888293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.987 [2024-07-15 23:52:54.891117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.987 [2024-07-15 23:52:54.900638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.987 [2024-07-15 23:52:54.901055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.987 [2024-07-15 23:52:54.901070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.987 [2024-07-15 23:52:54.901077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.987 [2024-07-15 23:52:54.901258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.987 [2024-07-15 23:52:54.901434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.987 [2024-07-15 23:52:54.901442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.988 [2024-07-15 23:52:54.901448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.988 [2024-07-15 23:52:54.904274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.988 [2024-07-15 23:52:54.913795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.988 [2024-07-15 23:52:54.914283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.988 [2024-07-15 23:52:54.914299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.988 [2024-07-15 23:52:54.914306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.988 [2024-07-15 23:52:54.914481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.988 [2024-07-15 23:52:54.914658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.988 [2024-07-15 23:52:54.914666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.988 [2024-07-15 23:52:54.914672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.988 [2024-07-15 23:52:54.917498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.988 [2024-07-15 23:52:54.926849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.988 [2024-07-15 23:52:54.927310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.988 [2024-07-15 23:52:54.927327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.988 [2024-07-15 23:52:54.927334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.988 [2024-07-15 23:52:54.927510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.988 [2024-07-15 23:52:54.927686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.988 [2024-07-15 23:52:54.927694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.988 [2024-07-15 23:52:54.927700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.988 [2024-07-15 23:52:54.930530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.988 [2024-07-15 23:52:54.940048] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.988 [2024-07-15 23:52:54.940411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.988 [2024-07-15 23:52:54.940427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.988 [2024-07-15 23:52:54.940434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.988 [2024-07-15 23:52:54.940610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.988 [2024-07-15 23:52:54.940786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.988 [2024-07-15 23:52:54.940794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.988 [2024-07-15 23:52:54.940800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.988 [2024-07-15 23:52:54.943626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:05.988 [2024-07-15 23:52:54.953088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:05.988 [2024-07-15 23:52:54.953449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:05.988 [2024-07-15 23:52:54.953465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420
00:27:05.988 [2024-07-15 23:52:54.953471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set
00:27:05.988 [2024-07-15 23:52:54.953648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor
00:27:05.988 [2024-07-15 23:52:54.953823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:05.988 [2024-07-15 23:52:54.953831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:05.988 [2024-07-15 23:52:54.953837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:05.988 [2024-07-15 23:52:54.956707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:06.247 [2024-07-15 23:52:54.966288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.247 [2024-07-15 23:52:54.966645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-07-15 23:52:54.966662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.247 [2024-07-15 23:52:54.966670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.247 [2024-07-15 23:52:54.966847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.247 [2024-07-15 23:52:54.967024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.247 [2024-07-15 23:52:54.967033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.247 [2024-07-15 23:52:54.967039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.247 [2024-07-15 23:52:54.969867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.247 [2024-07-15 23:52:54.979382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.247 [2024-07-15 23:52:54.979730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-07-15 23:52:54.979747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.247 [2024-07-15 23:52:54.979754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.247 [2024-07-15 23:52:54.979935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.247 [2024-07-15 23:52:54.980111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.247 [2024-07-15 23:52:54.980119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.247 [2024-07-15 23:52:54.980125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.247 [2024-07-15 23:52:54.982962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.247 [2024-07-15 23:52:54.992499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.247 [2024-07-15 23:52:54.993043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.247 [2024-07-15 23:52:54.993060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.247 [2024-07-15 23:52:54.993067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.247 [2024-07-15 23:52:54.993249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.247 [2024-07-15 23:52:54.993426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.247 [2024-07-15 23:52:54.993435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.247 [2024-07-15 23:52:54.993441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.247 [2024-07-15 23:52:54.996273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.247 [2024-07-15 23:52:55.005630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.006113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.006130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.006137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.006318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.006495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.006503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.006509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.009339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.018741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.019093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.019111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.019119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.019303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.019484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.019492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.019502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.022337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.031857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.032269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.032286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.032293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.032470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.032647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.032656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.032663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.035499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.045019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.045977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.046001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.046009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.046195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.046380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.046389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.046395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.049254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.058139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.058624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.058642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.058650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.058827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.059004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.059012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.059018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.061850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.071210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.071626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.071643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.071651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.071829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.072006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.072014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.072020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.074852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.084368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.084775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.084792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.084798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.084975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.085153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.085161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.085167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.087997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.097516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.097894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.097910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.097917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.098093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.098274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.098283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.098289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.101112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.110627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.111136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.111152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.111159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.111341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.111521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.111529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.111536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.114365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.123711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.124179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.124196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.124203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.124386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.124563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.124572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.124578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.127403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.136907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.248 [2024-07-15 23:52:55.137391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.248 [2024-07-15 23:52:55.137408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.248 [2024-07-15 23:52:55.137415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.248 [2024-07-15 23:52:55.137591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.248 [2024-07-15 23:52:55.137770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.248 [2024-07-15 23:52:55.137778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.248 [2024-07-15 23:52:55.137784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.248 [2024-07-15 23:52:55.140611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.248 [2024-07-15 23:52:55.149960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.249 [2024-07-15 23:52:55.150421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-07-15 23:52:55.150438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.249 [2024-07-15 23:52:55.150444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.249 [2024-07-15 23:52:55.150621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.249 [2024-07-15 23:52:55.150798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.249 [2024-07-15 23:52:55.150806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.249 [2024-07-15 23:52:55.150812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.249 [2024-07-15 23:52:55.153644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.249 [2024-07-15 23:52:55.163144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.249 [2024-07-15 23:52:55.163613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-07-15 23:52:55.163630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.249 [2024-07-15 23:52:55.163637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.249 [2024-07-15 23:52:55.163813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.249 [2024-07-15 23:52:55.163990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.249 [2024-07-15 23:52:55.163998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.249 [2024-07-15 23:52:55.164004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.249 [2024-07-15 23:52:55.166830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.249 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:06.249 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # return 0 00:27:06.249 23:52:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:06.249 [2024-07-15 23:52:55.176336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.249 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:06.249 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.249 [2024-07-15 23:52:55.176786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-07-15 23:52:55.176802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.249 [2024-07-15 23:52:55.176810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.249 [2024-07-15 23:52:55.176986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.249 [2024-07-15 23:52:55.177163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.249 [2024-07-15 23:52:55.177171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.249 [2024-07-15 23:52:55.177177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.249 [2024-07-15 23:52:55.180006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.249 [2024-07-15 23:52:55.189372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.249 [2024-07-15 23:52:55.189837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-07-15 23:52:55.189854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.249 [2024-07-15 23:52:55.189861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.249 [2024-07-15 23:52:55.190037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.249 [2024-07-15 23:52:55.190212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.249 [2024-07-15 23:52:55.190220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.249 [2024-07-15 23:52:55.190231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.249 [2024-07-15 23:52:55.193061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.249 [2024-07-15 23:52:55.202575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.249 [2024-07-15 23:52:55.203023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-07-15 23:52:55.203040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.249 [2024-07-15 23:52:55.203046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.249 [2024-07-15 23:52:55.203223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.249 [2024-07-15 23:52:55.203407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.249 [2024-07-15 23:52:55.203416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.249 [2024-07-15 23:52:55.203422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.249 [2024-07-15 23:52:55.206249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.249 23:52:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.249 23:52:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.249 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:06.249 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.249 [2024-07-15 23:52:55.215788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.249 [2024-07-15 23:52:55.216277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.249 [2024-07-15 23:52:55.216293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.249 [2024-07-15 23:52:55.216301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.249 [2024-07-15 23:52:55.216477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.249 [2024-07-15 23:52:55.216654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.249 [2024-07-15 23:52:55.216662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.249 [2024-07-15 23:52:55.216668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.249 [2024-07-15 23:52:55.217304] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.508 [2024-07-15 23:52:55.219535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.508 [2024-07-15 23:52:55.228924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.508 [2024-07-15 23:52:55.229402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.508 [2024-07-15 23:52:55.229419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.508 [2024-07-15 23:52:55.229427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.508 [2024-07-15 23:52:55.229604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.508 [2024-07-15 23:52:55.229785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.508 [2024-07-15 23:52:55.229793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.508 [2024-07-15 23:52:55.229799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.508 [2024-07-15 23:52:55.232630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.508 [2024-07-15 23:52:55.241966] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.508 [2024-07-15 23:52:55.242411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.508 [2024-07-15 23:52:55.242428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.508 [2024-07-15 23:52:55.242435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.508 [2024-07-15 23:52:55.242611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.508 [2024-07-15 23:52:55.242788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.508 [2024-07-15 23:52:55.242796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.508 [2024-07-15 23:52:55.242802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.508 [2024-07-15 23:52:55.245630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.508 [2024-07-15 23:52:55.255138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.508 [2024-07-15 23:52:55.255606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.508 [2024-07-15 23:52:55.255623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.508 [2024-07-15 23:52:55.255630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.508 [2024-07-15 23:52:55.255808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.508 [2024-07-15 23:52:55.255985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.508 [2024-07-15 23:52:55.255993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.508 [2024-07-15 23:52:55.255999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.508 [2024-07-15 23:52:55.258828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.508 Malloc0 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.508 [2024-07-15 23:52:55.268246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.508 [2024-07-15 23:52:55.268718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.508 [2024-07-15 23:52:55.268734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.508 [2024-07-15 23:52:55.268741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.508 [2024-07-15 23:52:55.268918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.508 [2024-07-15 23:52:55.269100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.508 [2024-07-15 23:52:55.269109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.508 [2024-07-15 23:52:55.269115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.508 [2024-07-15 23:52:55.271946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.508 [2024-07-15 23:52:55.281289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.508 [2024-07-15 23:52:55.281753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.508 [2024-07-15 23:52:55.281770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2504980 with addr=10.0.0.2, port=4420 00:27:06.508 [2024-07-15 23:52:55.281777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2504980 is same with the state(5) to be set 00:27:06.508 [2024-07-15 23:52:55.281953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2504980 (9): Bad file descriptor 00:27:06.508 [2024-07-15 23:52:55.282130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.508 [2024-07-15 23:52:55.282138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.508 [2024-07-15 23:52:55.282144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.508 [2024-07-15 23:52:55.284974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.508 [2024-07-15 23:52:55.290364] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.508 [2024-07-15 23:52:55.294489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:06.508 23:52:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1158177 00:27:06.767 [2024-07-15 23:52:55.483930] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:14.870 00:27:14.870 Latency(us) 00:27:14.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.870 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:14.870 Verification LBA range: start 0x0 length 0x4000 00:27:14.870 Nvme1n1 : 15.00 7991.84 31.22 13110.69 0.00 6045.57 662.48 18350.08 00:27:14.870 =================================================================================================================== 00:27:14.870 Total : 7991.84 31.22 13110.69 0.00 6045.57 662.48 18350.08 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:15.127 rmmod nvme_tcp 00:27:15.127 rmmod nvme_fabrics 00:27:15.127 rmmod nvme_keyring 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@124 -- # set -e 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1159105 ']' 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1159105 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@942 -- # '[' -z 1159105 ']' 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # kill -0 1159105 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # uname 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:15.127 23:53:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1159105 00:27:15.127 23:53:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:27:15.127 23:53:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:27:15.127 23:53:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1159105' 00:27:15.127 killing process with pid 1159105 00:27:15.127 23:53:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@961 -- # kill 1159105 00:27:15.127 23:53:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # wait 1159105 00:27:15.386 23:53:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:15.386 23:53:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:15.386 23:53:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:15.386 23:53:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.386 23:53:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:15.386 23:53:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.386 23:53:04 
nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.386 23:53:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.917 23:53:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:17.917 00:27:17.917 real 0m26.000s 00:27:17.917 user 1m2.946s 00:27:17.917 sys 0m6.081s 00:27:17.917 23:53:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:17.917 23:53:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.917 ************************************ 00:27:17.917 END TEST nvmf_bdevperf 00:27:17.917 ************************************ 00:27:17.917 23:53:06 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:27:17.917 23:53:06 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:17.917 23:53:06 nvmf_tcp -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:27:17.917 23:53:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:17.917 23:53:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:17.917 ************************************ 00:27:17.917 START TEST nvmf_target_disconnect 00:27:17.917 ************************************ 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:17.917 * Looking for test storage... 
00:27:17.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:17.917 23:53:06 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.917 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:17.918 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:17.918 23:53:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:17.918 23:53:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:23.247 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:23.247 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.247 23:53:11 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:23.247 Found net devices under 0000:86:00.0: cvl_0_0 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:23.247 Found net devices under 0000:86:00.1: cvl_0_1 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:23.247 23:53:11 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.247 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:23.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:27:23.248 00:27:23.248 --- 10.0.0.2 ping statistics --- 00:27:23.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.248 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:27:23.248 00:27:23.248 --- 10.0.0.1 ping statistics --- 00:27:23.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.248 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:23.248 23:53:11 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:23.248 ************************************ 00:27:23.248 START TEST nvmf_target_disconnect_tc1 00:27:23.248 ************************************ 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc1 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # local es=0 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:23.248 23:53:11 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:23.248 [2024-07-15 23:53:11.883275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.248 [2024-07-15 23:53:11.883313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e8e60 with addr=10.0.0.2, port=4420 00:27:23.248 [2024-07-15 23:53:11.883330] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:23.248 [2024-07-15 23:53:11.883339] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:23.248 [2024-07-15 23:53:11.883344] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 
00:27:23.248 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:23.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:23.248 Initializing NVMe Controllers 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # es=1 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:27:23.248 00:27:23.248 real 0m0.098s 00:27:23.248 user 0m0.043s 00:27:23.248 sys 0m0.053s 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:23.248 ************************************ 00:27:23.248 END TEST nvmf_target_disconnect_tc1 00:27:23.248 ************************************ 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:23.248 ************************************ 00:27:23.248 START TEST nvmf_target_disconnect_tc2 00:27:23.248 ************************************ 00:27:23.248 23:53:11 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc2 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1164098 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1164098 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1164098 ']' 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:23.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:23.248 23:53:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:23.248 [2024-07-15 23:53:12.022021] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:27:23.248 [2024-07-15 23:53:12.022061] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.248 [2024-07-15 23:53:12.092607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:23.248 [2024-07-15 23:53:12.172328] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.248 [2024-07-15 23:53:12.172365] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.248 [2024-07-15 23:53:12.172371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.248 [2024-07-15 23:53:12.172377] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.248 [2024-07-15 23:53:12.172382] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:23.248 [2024-07-15 23:53:12.172494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:23.248 [2024-07-15 23:53:12.173001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:23.248 [2024-07-15 23:53:12.173028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:23.248 [2024-07-15 23:53:12.173029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # return 0 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.184 Malloc0 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.184 [2024-07-15 23:53:12.893917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.184 [2024-07-15 23:53:12.922898] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1164293 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:24.184 23:53:12 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.092 23:53:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1164098 00:27:26.092 23:53:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:26.092 Read completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Read completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Read 
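The tc2 setup traced above configures the target entirely over RPC before launching the reconnect workload. The harness's rpc_cmd wrapper ultimately drives SPDK's scripts/rpc.py client; the sketch below is an assumption-laden recap of the same call sequence (same order and arguments as the trace), echoed rather than sent to a live target:

```shell
# Dry-run recap of the RPC sequence from target_disconnect.sh above.
# Assumption: rpc_cmd resolves to scripts/rpc.py run inside the target
# namespace; calls are echoed here instead of hitting a live nvmf_tgt.
RPC="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py"
rpc() { echo "+ $RPC $*"; }          # drop the echo to drive a real target

rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB malloc bdev, 512 B blocks
rpc nvmf_create_transport -t tcp -o          # TCP transport ($NVMF_TRANSPORT_OPTS)
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

Once the listeners are up, the test starts the reconnect example against 10.0.0.2:4420 and then kills the target (kill -9 of nvmfpid), which is what produces the "completed with error ... starting I/O failed" stream that follows.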
completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Read completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Write completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Write completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Read completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Read completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Write completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Read completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Read completed with error (sct=0, sc=8) 00:27:26.092 starting I/O failed 00:27:26.092 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read 
completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 [2024-07-15 23:53:14.949055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error 
(sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 [2024-07-15 23:53:14.949261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 
00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Write completed with error (sct=0, sc=8) 
00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 Read completed with error (sct=0, sc=8) 00:27:26.093 starting I/O failed 00:27:26.093 [2024-07-15 23:53:14.949457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:26.093 [2024-07-15 23:53:14.949687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.949704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-07-15 23:53:14.949866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.949876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-07-15 23:53:14.950079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.950088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-07-15 23:53:14.950218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.950232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 
00:27:26.093 [2024-07-15 23:53:14.950381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.950391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-07-15 23:53:14.950590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.950600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-07-15 23:53:14.950761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.950772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-07-15 23:53:14.951026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.951036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-07-15 23:53:14.951260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.951270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 
00:27:26.093 [2024-07-15 23:53:14.951407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.951417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.093 [2024-07-15 23:53:14.951559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.093 [2024-07-15 23:53:14.951569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.093 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.951696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.951705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.952003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.952013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.952302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.952332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 
00:27:26.094 [2024-07-15 23:53:14.952513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.952542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.952774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.952804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.953113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.953143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.953458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.953489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.953723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.953753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 
00:27:26.094 [2024-07-15 23:53:14.954092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.954122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.954346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.954376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.954559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.954599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.954752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.954762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 00:27:26.094 [2024-07-15 23:53:14.955032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.094 [2024-07-15 23:53:14.955043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.094 qpair failed and we were unable to recover it. 
00:27:26.094 [2024-07-15 23:53:14.955222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.955237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.955402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.955432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.955678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.955707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.956047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.956077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.956374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.956405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.956579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.956608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.956921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.956951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.957222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.957264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.957518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.957547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.957722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.957751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.957993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.958023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.958288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.958319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.958506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.958516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.958648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.958658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.958819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.958836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.959114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.959125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.959386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.959397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.959596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.959607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.959856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.959866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.960142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.960151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.960456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.960466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.960617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.960627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.960780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.960790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.961120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.961151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.961409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.961440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.961717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.961747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.962005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.962034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.094 [2024-07-15 23:53:14.962339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.094 [2024-07-15 23:53:14.962356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.094 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.962557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.962568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.962770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.962780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.963035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.963065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.963409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.963440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.963678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.963688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.963917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.963927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.964132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.964143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.964384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.964395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.964595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.964605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.964823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.964833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.964961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.964971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.965199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.965209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.965462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.965473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.965590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.965600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.965726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.965736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.965955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.965965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.966189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.966200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.966406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.966417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.966562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.966572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.966719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.966730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.966874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.966884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.967022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.967032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.967354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.967365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.967510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.967520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.967781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.967792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.967996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.968006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.968220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.968243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.968392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.968402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.968553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.968563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.968843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.968853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.969032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.969042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.969322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.969332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.969597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.969608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.969745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.969755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.969972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.970001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.970177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.970206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.970414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.970444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.970742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.970752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.970965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.970975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.971177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.095 [2024-07-15 23:53:14.971187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.095 qpair failed and we were unable to recover it.
00:27:26.095 [2024-07-15 23:53:14.971445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.971456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.971661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.971671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.971807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.971816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.972013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.972022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.972213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.972223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.972386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.972396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.972533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.972544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.972727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.972737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.972931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.972941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.973123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.973158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.973398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.973428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.973649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.973679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.973876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.973905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.974256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.974286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.974600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.974631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.974862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.974892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.975199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.975249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.975518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.975548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.975838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.975867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.976155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.976185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.976441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.976452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.976603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.976613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.976832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.976862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.977092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.977122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.977433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.977464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.977773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.977802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.978062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.978097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.978434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.978444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.978630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.978640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.978867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.978896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.979135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.979164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.979391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.979422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.979617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.979648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.979880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.979910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.980199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.980238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.980421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.980450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.980753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.980783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.981098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.981134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.981335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.981346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.981546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.981557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.981786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.096 [2024-07-15 23:53:14.981796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.096 qpair failed and we were unable to recover it.
00:27:26.096 [2024-07-15 23:53:14.981924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.981934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.982207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.982247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.982576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.982605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.982921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.982950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.983247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.983278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.983567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.983596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.983834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.983863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.984117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.984147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.984372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.984402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.984695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.984705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.984907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.984917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.985107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.985118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.985411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.985442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.985667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.985696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.986009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.986039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.986277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.986308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.986567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.986577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.986882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.986893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.987156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.987166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.987445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.987456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.987654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.987664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.987910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.987919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.988116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.988126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.988337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.988347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.988569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.988579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.988794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.988806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.989088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.989098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.989343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.989354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.989604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.989614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.989761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.989771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.990091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.990120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.990412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.990443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.990732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.097 [2024-07-15 23:53:14.990742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.097 qpair failed and we were unable to recover it.
00:27:26.097 [2024-07-15 23:53:14.991043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.991054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.991235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.991246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.991463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.991473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.991651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.991661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.991867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.991896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.992130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.992159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.992412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.992444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.992683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.992693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.992913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.992923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.993188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.993198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.993336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.993346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.993580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.993610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.993910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.993939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.994275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.994306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.994535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.994565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.994787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.994817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.995000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.995029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.995321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.995352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.995674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.995684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.996009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.996039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.996367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.996377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.996576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.996586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.996731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.996742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.996883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.996893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.997088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.997098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.997306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.997317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.997516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.997545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.997791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.997820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.998109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.998139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.998378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.998409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.998633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.998663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.999021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.999050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.999277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.999312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.999589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.999599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:14.999800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:14.999810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:15.000011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:15.000021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:15.000293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:15.000324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:15.000559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:15.000588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:15.000834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:15.000864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:15.001124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:15.001154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.098 [2024-07-15 23:53:15.001383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.098 [2024-07-15 23:53:15.001415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.098 qpair failed and we were unable to recover it.
00:27:26.099 [2024-07-15 23:53:15.001660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.099 [2024-07-15 23:53:15.001670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.099 qpair failed and we were unable to recover it.
00:27:26.099 [2024-07-15 23:53:15.001812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.099 [2024-07-15 23:53:15.001822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.099 qpair failed and we were unable to recover it.
00:27:26.099 [2024-07-15 23:53:15.002085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.099 [2024-07-15 23:53:15.002115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.099 qpair failed and we were unable to recover it.
00:27:26.099 [2024-07-15 23:53:15.002361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.099 [2024-07-15 23:53:15.002392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.099 qpair failed and we were unable to recover it.
00:27:26.099 [2024-07-15 23:53:15.002623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.099 [2024-07-15 23:53:15.002654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.099 qpair failed and we were unable to recover it.
00:27:26.099 [2024-07-15 23:53:15.002823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.099 [2024-07-15 23:53:15.002854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.099 qpair failed and we were unable to recover it.
00:27:26.099 [2024-07-15 23:53:15.003142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.099 [2024-07-15 23:53:15.003172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.099 qpair failed and we were unable to recover it.
00:27:26.099 [2024-07-15 23:53:15.003408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.003418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.003572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.003581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.003835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.003865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.004121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.004151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.004398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.004430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 
00:27:26.099 [2024-07-15 23:53:15.004601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.004631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.004872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.004902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.005246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.005278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.005523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.005553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.005743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.005773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 
00:27:26.099 [2024-07-15 23:53:15.006037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.006066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.006311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.006343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.006581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.006611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.006799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.006809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.007026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.007036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 
00:27:26.099 [2024-07-15 23:53:15.007161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.007171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.007379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.007389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.007587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.007597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.007813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.007824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.008023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.008033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 
00:27:26.099 [2024-07-15 23:53:15.008234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.008245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.008458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.008487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.008677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.008707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.008973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.009003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.009271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.009308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 
00:27:26.099 [2024-07-15 23:53:15.009555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.009586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.009746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.009756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.010116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.010146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.010339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.010370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.010634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.010665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 
00:27:26.099 [2024-07-15 23:53:15.010974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.011003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.011320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.011352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.011578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.011608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.099 qpair failed and we were unable to recover it. 00:27:26.099 [2024-07-15 23:53:15.011786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.099 [2024-07-15 23:53:15.011796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.012037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.012067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 
00:27:26.100 [2024-07-15 23:53:15.012327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.012357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.012530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.012560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.012813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.012843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.013138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.013168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.013441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.013472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 
00:27:26.100 [2024-07-15 23:53:15.013697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.013728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.013916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.013945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.014120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.014149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.014395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.014405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.014632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.014641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 
00:27:26.100 [2024-07-15 23:53:15.014838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.014848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.015151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.015181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.015376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.015407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.015650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.015680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.015924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.015954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 
00:27:26.100 [2024-07-15 23:53:15.016244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.016275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.016469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.016499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.016810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.016840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.017090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.017120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.017380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.017390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 
00:27:26.100 [2024-07-15 23:53:15.017635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.017645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.017825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.017835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.018051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.018061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.018328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.018359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.018551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.018580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 
00:27:26.100 [2024-07-15 23:53:15.018776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.018806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.019046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.019076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.019371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.019402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.019591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.019620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.019831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.019843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 
00:27:26.100 [2024-07-15 23:53:15.019974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.019985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.020181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.020214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.020422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.020452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.020700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.020742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.021046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.021056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 
00:27:26.100 [2024-07-15 23:53:15.021344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.021375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.021570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.021599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.021779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.021809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.022124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.100 [2024-07-15 23:53:15.022153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.100 qpair failed and we were unable to recover it. 00:27:26.100 [2024-07-15 23:53:15.022427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.022459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 
00:27:26.101 [2024-07-15 23:53:15.022704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.022734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.022903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.022933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.023189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.023219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.023526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.023536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.023782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.023792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 
00:27:26.101 [2024-07-15 23:53:15.023932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.023942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.024160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.024190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.024409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.024440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.024619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.024649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.024959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.024969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 
00:27:26.101 [2024-07-15 23:53:15.025250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.025260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.025394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.025404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.025602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.025631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.025824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.025853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.026112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.026141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 
00:27:26.101 [2024-07-15 23:53:15.026361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.026392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.026639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.026669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.026896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.026905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.027049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.027059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 00:27:26.101 [2024-07-15 23:53:15.027288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.101 [2024-07-15 23:53:15.027299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.101 qpair failed and we were unable to recover it. 
00:27:26.101 [2024-07-15 23:53:15.027565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.027574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.027788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.027797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.028068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.028078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.028260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.028270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.028443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.028472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.028760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.028789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.029109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.029139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.029388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.029419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.029606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.029636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.030076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.030111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.030429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.030460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.030730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.030740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.030893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.030903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.031111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.031121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.101 [2024-07-15 23:53:15.031390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.101 [2024-07-15 23:53:15.031420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.101 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.031669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.031699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.031894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.031924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.032214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.032268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.032455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.032485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.032718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.032728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.033042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.033051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.033175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.033185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.033383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.033393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.033551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.033562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.033685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.033695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.033915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.033926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.034062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.034072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.034201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.034211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.034369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.034380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.034617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.034627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.034755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.034764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.035009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.035019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.035162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.035183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.035434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.035446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.035692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.035701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.035978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.035988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.036110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.036120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.036316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.036326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.036522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.036532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.036709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.036729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.037002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.037013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.037209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.037218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.037483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.037493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.037687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.037696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.037911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.037920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.038195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.038204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.038345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.038356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.038497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.038507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.038663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.038674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.038817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.038828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.039036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.039048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.039236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.039246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.039512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.039522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.039705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.039716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.039858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.039868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.102 qpair failed and we were unable to recover it.
00:27:26.102 [2024-07-15 23:53:15.040081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.102 [2024-07-15 23:53:15.040092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.040296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.040307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.040463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.040474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.040622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.040632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.040849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.040861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.041122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.041133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.041324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.041334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.041464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.041474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.041672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.041682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.041957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.041968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.042157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.042168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.042317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.042328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.042439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.042449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.042572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.042582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.042786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.042797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.043013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.043024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.043272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.043283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.043532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.043542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.043664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.043674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.043861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.043872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.044058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.044068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.044319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.044331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.044482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.044493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.044647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.044658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.044808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.044818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.044950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.044962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.045085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.045095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.045397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.045408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.045618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.045629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.045829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.045839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.046043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.046055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.046324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.046335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.046484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.046494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.046689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.046699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.047024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.047036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.047241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.047251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.047534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.047544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.047695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.047705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.047847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.047857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.048086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.048096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.048363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.103 [2024-07-15 23:53:15.048394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.103 qpair failed and we were unable to recover it.
00:27:26.103 [2024-07-15 23:53:15.048632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.048662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.049996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.050018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.050263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.050274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.050554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.050585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.050831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.050860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.051094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.051123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.051355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.051386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.051586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.051616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.051858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.051868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.052140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.052150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.052408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.052418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.052548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.052558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.052701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.052710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.052974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.052984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.053184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.053193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.053393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.053403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.053562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.053572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.053724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.053734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.053927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.053938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.054121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.054131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.054292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.054328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.054604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.054621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.054823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.054845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.055063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.055081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.055283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.055299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.055507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.055521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.055683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.055697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.056051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.056065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.056200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.056213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.056419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.056433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.056586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.056599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.056865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.056880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.057080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.057101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.057410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.057424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.057640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.057654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.057844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.057858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.058066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.058079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.058332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.058346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.058616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.058631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.104 [2024-07-15 23:53:15.058789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.104 [2024-07-15 23:53:15.058810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.104 qpair failed and we were unable to recover it.
00:27:26.380 [2024-07-15 23:53:15.059081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.380 [2024-07-15 23:53:15.059098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.380 qpair failed and we were unable to recover it.
00:27:26.380 [2024-07-15 23:53:15.059395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.380 [2024-07-15 23:53:15.059410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.380 qpair failed and we were unable to recover it.
00:27:26.380 [2024-07-15 23:53:15.059616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.380 [2024-07-15 23:53:15.059630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.380 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.059776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.059789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.060004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.060018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.060326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.060340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.060602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.060615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.060914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.060931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.061201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.061216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.061438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.061452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.061670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.061684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.061908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.061922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.062163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.062176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.062396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.062410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.062553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.062566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.062823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.062836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.063091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.063105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.063302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.063316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.063528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.063543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.063816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.063830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.064115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.064128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.064313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.064327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.064494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.064508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.064762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.064775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.065051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.065065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.065216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.065236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.065408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.065422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.065661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.065675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.065973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.381 [2024-07-15 23:53:15.065987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.381 qpair failed and we were unable to recover it.
00:27:26.381 [2024-07-15 23:53:15.066281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.066295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.066551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.066565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.066758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.066771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.066992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.067005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.067207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.067220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.067434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.067450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.067654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.067667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.067908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.067922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.068063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.068076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.068292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.068306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.068461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.068474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.068680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.068694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.068905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.068919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.069189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.069203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.069431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.069445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.069637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.069650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.069895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.069908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.070185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.070198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.070478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.070492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.070656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.070670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.070938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.070951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.071172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.071185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.071401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.071415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.071690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.071704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.071909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.382 [2024-07-15 23:53:15.071923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.382 qpair failed and we were unable to recover it.
00:27:26.382 [2024-07-15 23:53:15.072122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.382 [2024-07-15 23:53:15.072135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.382 qpair failed and we were unable to recover it. 00:27:26.382 [2024-07-15 23:53:15.072339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.072353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.072629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.072642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.072939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.072953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.073254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.073268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 
00:27:26.383 [2024-07-15 23:53:15.073415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.073429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.073675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.073689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.073924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.073940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.074154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.074167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.074371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.074385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 
00:27:26.383 [2024-07-15 23:53:15.074604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.074617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.074767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.074781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.075069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.075083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.075384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.075398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.075628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.075641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 
00:27:26.383 [2024-07-15 23:53:15.075848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.075861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.075992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.076005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.076283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.076297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.076531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.076545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.076752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.076766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 
00:27:26.383 [2024-07-15 23:53:15.077008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.077022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.077293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.077307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.077460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.077473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.077674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.077687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.077889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.077902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 
00:27:26.383 [2024-07-15 23:53:15.078097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.078110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.078361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.383 [2024-07-15 23:53:15.078375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.383 qpair failed and we were unable to recover it. 00:27:26.383 [2024-07-15 23:53:15.078683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.384 [2024-07-15 23:53:15.078697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.384 qpair failed and we were unable to recover it. 00:27:26.384 [2024-07-15 23:53:15.078897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.384 [2024-07-15 23:53:15.078910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.384 qpair failed and we were unable to recover it. 00:27:26.384 [2024-07-15 23:53:15.079134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.384 [2024-07-15 23:53:15.079151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.384 qpair failed and we were unable to recover it. 
00:27:26.384 [2024-07-15 23:53:15.079353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.384 [2024-07-15 23:53:15.079368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.384 qpair failed and we were unable to recover it. 00:27:26.384 [2024-07-15 23:53:15.079564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.384 [2024-07-15 23:53:15.079581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.384 qpair failed and we were unable to recover it. 00:27:26.384 [2024-07-15 23:53:15.079795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.384 [2024-07-15 23:53:15.079808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.384 qpair failed and we were unable to recover it. 00:27:26.384 [2024-07-15 23:53:15.080106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.384 [2024-07-15 23:53:15.080126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:26.384 qpair failed and we were unable to recover it. 00:27:26.384 [2024-07-15 23:53:15.080356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.384 [2024-07-15 23:53:15.080371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.384 qpair failed and we were unable to recover it. 
00:27:26.386 [2024-07-15 23:53:15.096296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.386 [2024-07-15 23:53:15.096306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.386 qpair failed and we were unable to recover it. 00:27:26.386 [2024-07-15 23:53:15.096553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.096563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.096845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.096855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.097053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.097063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.097318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.097329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 
00:27:26.387 [2024-07-15 23:53:15.097478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.097488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.097775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.097785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.098039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.098049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.098253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.098263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.098398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.098408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 
00:27:26.387 [2024-07-15 23:53:15.098554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.098565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.098712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.098722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.098987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.098997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.099273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.099284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.099475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.099486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 
00:27:26.387 [2024-07-15 23:53:15.099698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.099708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.099937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.099947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.100220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.100234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.100445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.100455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.100597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.100607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 
00:27:26.387 [2024-07-15 23:53:15.100831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.100841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.101124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.101134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.101350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.387 [2024-07-15 23:53:15.101361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.387 qpair failed and we were unable to recover it. 00:27:26.387 [2024-07-15 23:53:15.101498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.101509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.101733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.101743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 
00:27:26.388 [2024-07-15 23:53:15.101956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.101966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.102107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.102117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.102374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.102385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.102522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.102532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.102782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.102792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 
00:27:26.388 [2024-07-15 23:53:15.102983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.102993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.103258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.103269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.103483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.103493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.103639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.103649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.103861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.103871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 
00:27:26.388 [2024-07-15 23:53:15.104071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.104082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.104283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.104299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.104480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.104490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.104697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.104708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.104856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.104867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 
00:27:26.388 [2024-07-15 23:53:15.105060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.105070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.105344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.105354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.105545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.105554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.105698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.105708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.105953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.105963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 
00:27:26.388 [2024-07-15 23:53:15.106157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.106168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.106369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.106381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.106518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.106528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.106729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.106741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.107060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.107071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 
00:27:26.388 [2024-07-15 23:53:15.107218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.107232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.107400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.107410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.107622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.107632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.107769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.107779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.107999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.108010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 
00:27:26.388 [2024-07-15 23:53:15.108215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.108230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.108485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.108496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.108626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.108636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.108791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.108801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.109005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.109015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 
00:27:26.388 [2024-07-15 23:53:15.109307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.388 [2024-07-15 23:53:15.109318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.388 qpair failed and we were unable to recover it. 00:27:26.388 [2024-07-15 23:53:15.109521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.109531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.109671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.109681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.109933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.109943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.110194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.110205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 
00:27:26.389 [2024-07-15 23:53:15.110432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.110443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.110628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.110638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.110844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.110854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.111049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.111060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.111280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.111291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 
00:27:26.389 [2024-07-15 23:53:15.111486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.111496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.111687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.111697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.112023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.112035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.112282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.112293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 00:27:26.389 [2024-07-15 23:53:15.112463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.389 [2024-07-15 23:53:15.112474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.389 qpair failed and we were unable to recover it. 
00:27:26.389 [2024-07-15 23:53:15.112686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.389 [2024-07-15 23:53:15.112697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.389 qpair failed and we were unable to recover it.
00:27:26.389 [... the same connect() failure (errno = 111, ECONNREFUSED) against addr=10.0.0.2, port=4420 on tqpair=0x7fbe48000b90, each followed by "qpair failed and we were unable to recover it.", repeats continuously from 23:53:15.112 through 23:53:15.135 ...]
00:27:26.392 [2024-07-15 23:53:15.135805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.392 [2024-07-15 23:53:15.135815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.392 qpair failed and we were unable to recover it.
00:27:26.392 [2024-07-15 23:53:15.136028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.136038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.136230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.136240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.136433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.136444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.136639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.136649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.136863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.136872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 
00:27:26.392 [2024-07-15 23:53:15.137013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.137022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.137291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.137303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.137495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.137505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.137707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.137717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.137960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.137970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 
00:27:26.392 [2024-07-15 23:53:15.138197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.138207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.138422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.138432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.138628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.138638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.138777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.138786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.139159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.139169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 
00:27:26.392 [2024-07-15 23:53:15.139361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.139370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.139575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.139585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.139791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.139802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.140036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.140047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.140322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.140332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 
00:27:26.392 [2024-07-15 23:53:15.140603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.140613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.140812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.140822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.141036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.141046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.141189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.141198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 00:27:26.392 [2024-07-15 23:53:15.141473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.141484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.392 qpair failed and we were unable to recover it. 
00:27:26.392 [2024-07-15 23:53:15.141685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.392 [2024-07-15 23:53:15.141695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.141899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.141909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.142136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.142146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.142345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.142356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.142545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.142555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 
00:27:26.393 [2024-07-15 23:53:15.142691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.142701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.142843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.142853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.142978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.142988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.143135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.143145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.143407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.143417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 
00:27:26.393 [2024-07-15 23:53:15.143566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.143576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.143844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.143854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.143979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.143989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.144255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.144265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.144398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.144407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 
00:27:26.393 [2024-07-15 23:53:15.144635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.144644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.144838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.144847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.145092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.145102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.145395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.145406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.145602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.145612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 
00:27:26.393 [2024-07-15 23:53:15.145890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.145900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.146175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.146187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.146409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.146420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.146666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.146676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.146874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.146884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 
00:27:26.393 [2024-07-15 23:53:15.147067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.147077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.147213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.147227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.147433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.147443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.147691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.147701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.147902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.147912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 
00:27:26.393 [2024-07-15 23:53:15.148151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.148160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.148389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.148399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.148669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.148679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.148879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.148890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.149160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.149170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 
00:27:26.393 [2024-07-15 23:53:15.149381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.149392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.149674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.149684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.149863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.149873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.150139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.150149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 00:27:26.393 [2024-07-15 23:53:15.150357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.150367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.393 qpair failed and we were unable to recover it. 
00:27:26.393 [2024-07-15 23:53:15.150519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.393 [2024-07-15 23:53:15.150529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 00:27:26.394 [2024-07-15 23:53:15.150675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.394 [2024-07-15 23:53:15.150684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 00:27:26.394 [2024-07-15 23:53:15.150821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.394 [2024-07-15 23:53:15.150831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 00:27:26.394 [2024-07-15 23:53:15.151015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.394 [2024-07-15 23:53:15.151025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 00:27:26.394 [2024-07-15 23:53:15.151236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.394 [2024-07-15 23:53:15.151246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 
00:27:26.394 [2024-07-15 23:53:15.151527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.394 [2024-07-15 23:53:15.151537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 00:27:26.394 [2024-07-15 23:53:15.151747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.394 [2024-07-15 23:53:15.151756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 00:27:26.394 [2024-07-15 23:53:15.152031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.394 [2024-07-15 23:53:15.152041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 00:27:26.394 [2024-07-15 23:53:15.152323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.394 [2024-07-15 23:53:15.152334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 00:27:26.394 [2024-07-15 23:53:15.152552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.394 [2024-07-15 23:53:15.152561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 
00:27:26.394 [2024-07-15 23:53:15.152762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.394 [2024-07-15 23:53:15.152772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.394 qpair failed and we were unable to recover it. 
00:27:26.397 [... the same two messages — connect() failed with errno = 111 (ECONNREFUSED) followed by the sock connection error for tqpair=0x7fbe48000b90, addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeat identically (timestamps 23:53:15.153065 through 23:53:15.178692) for every reconnect attempt; ~114 duplicate log pairs elided ...]
00:27:26.397 [2024-07-15 23:53:15.178937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.178947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.179192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.179202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.179476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.179487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.179691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.179703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.179904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.179914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 
00:27:26.397 [2024-07-15 23:53:15.180206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.180216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.180423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.180433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.180689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.180699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.180900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.180910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.181037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.181046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 
00:27:26.397 [2024-07-15 23:53:15.181316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.181327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.181548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.181558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.181860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.181870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.182061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.182071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.182352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.182362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 
00:27:26.397 [2024-07-15 23:53:15.182650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.182660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.182926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.182935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.183231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.183241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.183479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.183489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.183765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.183775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 
00:27:26.397 [2024-07-15 23:53:15.184065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.184075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.184327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.184338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.184605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.184615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.184752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.184761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.184958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.184968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 
00:27:26.397 [2024-07-15 23:53:15.185236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.185247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.185454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.185463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.185654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.397 [2024-07-15 23:53:15.185664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.397 qpair failed and we were unable to recover it. 00:27:26.397 [2024-07-15 23:53:15.185912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.185922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.186132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.186142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 
00:27:26.398 [2024-07-15 23:53:15.186341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.186351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.186647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.186656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.186908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.186918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.187117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.187126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.187351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.187361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 
00:27:26.398 [2024-07-15 23:53:15.187547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.187556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.187853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.187863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.188137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.188147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.188347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.188357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.188564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.188574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 
00:27:26.398 [2024-07-15 23:53:15.188768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.188777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.188925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.188934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.189180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.189190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.189450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.189462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.189724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.189734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 
00:27:26.398 [2024-07-15 23:53:15.189987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.189997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.190255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.190265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.190473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.190483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.190703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.190713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.190924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.190933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 
00:27:26.398 [2024-07-15 23:53:15.191203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.191212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.191472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.191481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.191777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.191787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.192068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.192078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.192351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.192361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 
00:27:26.398 [2024-07-15 23:53:15.192497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.192507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.192686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.192695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.192981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.192991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.193192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.193201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.193497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.193507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 
00:27:26.398 [2024-07-15 23:53:15.193701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.193711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.193955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.193965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.194164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.194174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.194446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.194456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.194585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.194595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 
00:27:26.398 [2024-07-15 23:53:15.194773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.194783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.195065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.195075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.195255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.398 [2024-07-15 23:53:15.195266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.398 qpair failed and we were unable to recover it. 00:27:26.398 [2024-07-15 23:53:15.195489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.195499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.195701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.195711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 
00:27:26.399 [2024-07-15 23:53:15.196005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.196015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.196274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.196284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.196554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.196564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.196741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.196750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.197018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.197028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 
00:27:26.399 [2024-07-15 23:53:15.197175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.197184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.197459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.197469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.197603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.197613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.197884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.197894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.198026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.198036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 
00:27:26.399 [2024-07-15 23:53:15.198283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.198293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.198482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.198492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.198740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.198750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.198960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.198972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 00:27:26.399 [2024-07-15 23:53:15.199241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.399 [2024-07-15 23:53:15.199251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.399 qpair failed and we were unable to recover it. 
00:27:26.399 [2024-07-15 23:53:15.199520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.199530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.199766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.199776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.199922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.199932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.200221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.200240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.200424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.200433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.200686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.200695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.200894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.200904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.201170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.201180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.201302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.201312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.201525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.201534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.201731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.201740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.202005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.202015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.202234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.202245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.202491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.202501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.202711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.202721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.202976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.202985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.203186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.203195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.203440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.203450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.203717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.203727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.203950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.203959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.204136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.204146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.204387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.204398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.204623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.399 [2024-07-15 23:53:15.204633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.399 qpair failed and we were unable to recover it.
00:27:26.399 [2024-07-15 23:53:15.204886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.204896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.205101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.205111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.205316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.205327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.205529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.205539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.205809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.205818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.206033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.206042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.206313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.206323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.206507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.206517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.206724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.206734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.206933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.206943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.207185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.207195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.207466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.207476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.207732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.207742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.208034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.208044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.208266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.208276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.208458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.208470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.208718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.208728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.208997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.209007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.209154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.209163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.209418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.209428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.209614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.209624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.209826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.209836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.210098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.210108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.210310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.210320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.210509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.210519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.210764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.210774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.210969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.210979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.211232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.211242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.211439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.211449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.211748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.211758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.211895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.211905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.212201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.212211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.212487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.212497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.212772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.212782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.212914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.212923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.400 qpair failed and we were unable to recover it.
00:27:26.400 [2024-07-15 23:53:15.213101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.400 [2024-07-15 23:53:15.213111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.213288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.213298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.213491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.213501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.213746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.213756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.214005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.214014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.214286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.214297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.214428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.214438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.214638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.214648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.214919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.214929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.215239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.215249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.215515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.215524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.215731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.215741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.216046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.216055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.216169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.216178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.216444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.216455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.216728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.216738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.216935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.216945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.217205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.217215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.217497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.217507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.217732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.217742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.217885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.217896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.218193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.218203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.218352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.218362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.218562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.218572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.218773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.218783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.219074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.219084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.219355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.219365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.219629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.219639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.219932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.219942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.220138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.220147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.220362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.220372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.220621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.220631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.220925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.220935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.221150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.221161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.221340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.401 [2024-07-15 23:53:15.221350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.401 qpair failed and we were unable to recover it.
00:27:26.401 [2024-07-15 23:53:15.221608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.402 [2024-07-15 23:53:15.221617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.402 qpair failed and we were unable to recover it.
00:27:26.402 [2024-07-15 23:53:15.221877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.402 [2024-07-15 23:53:15.221887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.402 qpair failed and we were unable to recover it.
00:27:26.402 [2024-07-15 23:53:15.222135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.222145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.222383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.222393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.222677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.222687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.222945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.222955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.223232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.223242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 
00:27:26.402 [2024-07-15 23:53:15.223458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.223468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.223737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.223747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.223972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.223982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.224182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.224191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.224323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.224333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 
00:27:26.402 [2024-07-15 23:53:15.224542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.224552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.224795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.224805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.225040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.225049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.225247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.225257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.225449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.225459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 
00:27:26.402 [2024-07-15 23:53:15.225720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.225730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.225935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.225945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.226126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.226136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.226401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.226412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.226544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.226554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 
00:27:26.402 [2024-07-15 23:53:15.226733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.226742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.226925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.226935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.227180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.227190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.227387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.227399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.402 qpair failed and we were unable to recover it. 00:27:26.402 [2024-07-15 23:53:15.227659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.402 [2024-07-15 23:53:15.227668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 
00:27:26.403 [2024-07-15 23:53:15.227863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.227873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.228073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.228083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.228353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.228363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.228674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.228684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.228904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.228914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 
00:27:26.403 [2024-07-15 23:53:15.229101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.229111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.229359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.229369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.229603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.229612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.229794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.229804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.230080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.230089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 
00:27:26.403 [2024-07-15 23:53:15.230364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.230374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.230620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.230629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.230842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.230852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.231070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.231080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.231281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.231291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 
00:27:26.403 [2024-07-15 23:53:15.231475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.231485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.231662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.231672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.231899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.231908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.232164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.232174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.232448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.232458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 
00:27:26.403 [2024-07-15 23:53:15.232604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.232613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.232738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.232748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.232939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.232949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.233068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.233078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 00:27:26.403 [2024-07-15 23:53:15.233390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.233401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.403 qpair failed and we were unable to recover it. 
00:27:26.403 [2024-07-15 23:53:15.233554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.403 [2024-07-15 23:53:15.233564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.233692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.233702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.233904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.233914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.234110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.234119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.234318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.234328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 
00:27:26.404 [2024-07-15 23:53:15.234506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.234516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.234705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.234715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.234988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.234998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.235242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.235252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.235432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.235441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 
00:27:26.404 [2024-07-15 23:53:15.235712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.235722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.235861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.235870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.235995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.236005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.236221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.236236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.236435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.236444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 
00:27:26.404 [2024-07-15 23:53:15.236732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.236743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.236905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.236915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.237240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.237251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.237523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.237533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.237793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.237803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 
00:27:26.404 [2024-07-15 23:53:15.237946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.237955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.238200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.238210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.238479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.238490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.238765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.238775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.238912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.238922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 
00:27:26.404 [2024-07-15 23:53:15.239167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.239176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.239442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.239452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.404 qpair failed and we were unable to recover it. 00:27:26.404 [2024-07-15 23:53:15.239714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.404 [2024-07-15 23:53:15.239724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.405 qpair failed and we were unable to recover it. 00:27:26.405 [2024-07-15 23:53:15.239987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-07-15 23:53:15.239997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.405 qpair failed and we were unable to recover it. 00:27:26.405 [2024-07-15 23:53:15.240232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-07-15 23:53:15.240242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.405 qpair failed and we were unable to recover it. 
00:27:26.405 [2024-07-15 23:53:15.240540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-07-15 23:53:15.240550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.405 qpair failed and we were unable to recover it. 00:27:26.405 [2024-07-15 23:53:15.240723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-07-15 23:53:15.240732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.405 qpair failed and we were unable to recover it. 00:27:26.405 [2024-07-15 23:53:15.240989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-07-15 23:53:15.240998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.405 qpair failed and we were unable to recover it. 00:27:26.405 [2024-07-15 23:53:15.241269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-07-15 23:53:15.241279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.405 qpair failed and we were unable to recover it. 00:27:26.405 [2024-07-15 23:53:15.241419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-07-15 23:53:15.241429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.405 qpair failed and we were unable to recover it. 
00:27:26.405 [2024-07-15 23:53:15.241573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-07-15 23:53:15.241583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.405 qpair failed and we were unable to recover it. 
00:27:26.409 [2024-07-15 23:53:15.268931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.409 [2024-07-15 23:53:15.268941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.409 qpair failed and we were unable to recover it. 00:27:26.409 [2024-07-15 23:53:15.269266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.409 [2024-07-15 23:53:15.269276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.409 qpair failed and we were unable to recover it. 00:27:26.409 [2024-07-15 23:53:15.269423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.409 [2024-07-15 23:53:15.269433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.409 qpair failed and we were unable to recover it. 00:27:26.409 [2024-07-15 23:53:15.269748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.409 [2024-07-15 23:53:15.269758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.409 qpair failed and we were unable to recover it. 00:27:26.409 [2024-07-15 23:53:15.270030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.409 [2024-07-15 23:53:15.270040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.409 qpair failed and we were unable to recover it. 
00:27:26.409 [2024-07-15 23:53:15.270302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.270313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.270512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.270521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.270703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.270713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.270988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.270998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.271246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.271256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 
00:27:26.410 [2024-07-15 23:53:15.271449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.271459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.271654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.271664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.271950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.271960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.272206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.272215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.272386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.272421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 
00:27:26.410 [2024-07-15 23:53:15.272740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.272756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.273037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.273051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.273354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.273368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.273663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.273676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.273834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.273848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 
00:27:26.410 [2024-07-15 23:53:15.274167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.274180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.274374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.274388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.274521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.274535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.274723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.274741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.275023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.275036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 
00:27:26.410 [2024-07-15 23:53:15.275176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.275189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.275462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.275475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.275730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.275743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.275878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.275891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.276111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.276124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 
00:27:26.410 [2024-07-15 23:53:15.276367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.276380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.276651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.276665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.276901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.276915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.277214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.277233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 00:27:26.410 [2024-07-15 23:53:15.277467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.410 [2024-07-15 23:53:15.277480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.410 qpair failed and we were unable to recover it. 
00:27:26.410 [2024-07-15 23:53:15.277735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.277748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.278030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.278044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.278195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.278209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.278421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.278436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.278706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.278719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 
00:27:26.411 [2024-07-15 23:53:15.278974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.278988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.279221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.279239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.279540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.279554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.279837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.279851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.280055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.280068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 
00:27:26.411 [2024-07-15 23:53:15.280267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.280281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.280408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.280421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.280630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.280643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.280833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.280846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.281126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.281139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 
00:27:26.411 [2024-07-15 23:53:15.281418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.281430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.281700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.281709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.281852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.281862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.282133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.282143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.282407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.282417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 
00:27:26.411 [2024-07-15 23:53:15.282693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.282703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.282950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.282960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.283159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.283168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.283426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.283436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.283729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.283739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 
00:27:26.411 [2024-07-15 23:53:15.283939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.283949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.284074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.284083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.284299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.284309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.284504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.284516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.284807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.284816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 
00:27:26.411 [2024-07-15 23:53:15.285024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.285034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.285302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.285312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.285615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.285625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.285889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.285899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.286164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.286174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 
00:27:26.411 [2024-07-15 23:53:15.286356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.286366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.411 [2024-07-15 23:53:15.286586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.411 [2024-07-15 23:53:15.286595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.411 qpair failed and we were unable to recover it. 00:27:26.412 [2024-07-15 23:53:15.286778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.412 [2024-07-15 23:53:15.286788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.412 qpair failed and we were unable to recover it. 00:27:26.412 [2024-07-15 23:53:15.287044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.412 [2024-07-15 23:53:15.287054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.412 qpair failed and we were unable to recover it. 00:27:26.412 [2024-07-15 23:53:15.287301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.412 [2024-07-15 23:53:15.287311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.412 qpair failed and we were unable to recover it. 
00:27:26.412 [2024-07-15 23:53:15.287556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.412 [2024-07-15 23:53:15.287566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.412 qpair failed and we were unable to recover it. 00:27:26.412 [2024-07-15 23:53:15.287752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.412 [2024-07-15 23:53:15.287762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.412 qpair failed and we were unable to recover it. 00:27:26.412 [2024-07-15 23:53:15.287950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.412 [2024-07-15 23:53:15.287960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.412 qpair failed and we were unable to recover it. 00:27:26.412 [2024-07-15 23:53:15.288237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.412 [2024-07-15 23:53:15.288247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.412 qpair failed and we were unable to recover it. 00:27:26.412 [2024-07-15 23:53:15.288494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.412 [2024-07-15 23:53:15.288504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.412 qpair failed and we were unable to recover it. 
00:27:26.412 [... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it" triplet repeats for tqpair=0x7fbe48000b90 (addr=10.0.0.2, port=4420) from 23:53:15.288682 through 23:53:15.314432 ...]
00:27:26.415 [2024-07-15 23:53:15.314646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.415 [2024-07-15 23:53:15.314655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.415 qpair failed and we were unable to recover it. 00:27:26.415 [2024-07-15 23:53:15.314834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.415 [2024-07-15 23:53:15.314844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.415 qpair failed and we were unable to recover it. 00:27:26.415 [2024-07-15 23:53:15.315121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.415 [2024-07-15 23:53:15.315130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.415 qpair failed and we were unable to recover it. 00:27:26.415 [2024-07-15 23:53:15.315278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.415 [2024-07-15 23:53:15.315288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.415 qpair failed and we were unable to recover it. 00:27:26.415 [2024-07-15 23:53:15.315473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.415 [2024-07-15 23:53:15.315483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.415 qpair failed and we were unable to recover it. 
00:27:26.415 [2024-07-15 23:53:15.315668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.415 [2024-07-15 23:53:15.315677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.415 qpair failed and we were unable to recover it. 00:27:26.415 [2024-07-15 23:53:15.315793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.415 [2024-07-15 23:53:15.315802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.415 qpair failed and we were unable to recover it. 00:27:26.415 [2024-07-15 23:53:15.316008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.415 [2024-07-15 23:53:15.316018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.415 qpair failed and we were unable to recover it. 00:27:26.415 [2024-07-15 23:53:15.316262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.415 [2024-07-15 23:53:15.316272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.415 qpair failed and we were unable to recover it. 00:27:26.415 [2024-07-15 23:53:15.316399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.316409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 
00:27:26.416 [2024-07-15 23:53:15.316654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.316664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.316932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.316942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.317108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.317117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.317387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.317397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.317640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.317650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 
00:27:26.416 [2024-07-15 23:53:15.317897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.317907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.318107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.318117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.318391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.318401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.318659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.318669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.318848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.318858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 
00:27:26.416 [2024-07-15 23:53:15.319103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.319115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.319332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.319342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.319630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.319639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.319820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.319829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.320097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.320106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 
00:27:26.416 [2024-07-15 23:53:15.320382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.320392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.320657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.320667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.320870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.320879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.321091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.321100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.321366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.321376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 
00:27:26.416 [2024-07-15 23:53:15.321624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.321633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.321836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.321846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.321971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.321980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.322106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.322116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.322437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.322448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 
00:27:26.416 [2024-07-15 23:53:15.322736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.416 [2024-07-15 23:53:15.322746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.416 qpair failed and we were unable to recover it. 00:27:26.416 [2024-07-15 23:53:15.323033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.323043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.323238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.323248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.323548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.323557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.323812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.323822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 
00:27:26.417 [2024-07-15 23:53:15.324092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.324101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.324253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.324264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.324455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.324465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.324732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.324742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.324915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.324925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 
00:27:26.417 [2024-07-15 23:53:15.325170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.325180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.325372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.325382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.325582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.325592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.325836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.325846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.326023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.326032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 
00:27:26.417 [2024-07-15 23:53:15.326150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.326159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.326428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.326439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.326659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.326669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.326943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.326953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.327131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.327140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 
00:27:26.417 [2024-07-15 23:53:15.327290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.327301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.327496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.327506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.327706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.327716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.327856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.327866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.328138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.328148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 
00:27:26.417 [2024-07-15 23:53:15.328350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.328362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.328561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.417 [2024-07-15 23:53:15.328571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.417 qpair failed and we were unable to recover it. 00:27:26.417 [2024-07-15 23:53:15.328685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.328694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 00:27:26.418 [2024-07-15 23:53:15.328893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.328903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 00:27:26.418 [2024-07-15 23:53:15.329149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.329159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 
00:27:26.418 [2024-07-15 23:53:15.329305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.329315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 00:27:26.418 [2024-07-15 23:53:15.329517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.329526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 00:27:26.418 [2024-07-15 23:53:15.329772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.329781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 00:27:26.418 [2024-07-15 23:53:15.330055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.330064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 00:27:26.418 [2024-07-15 23:53:15.330257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.330268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 
00:27:26.418 [2024-07-15 23:53:15.330401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.330411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 00:27:26.418 [2024-07-15 23:53:15.330611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.330621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 00:27:26.418 [2024-07-15 23:53:15.330866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.330877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 00:27:26.418 [2024-07-15 23:53:15.331009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.331018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 00:27:26.418 [2024-07-15 23:53:15.331289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.331300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it. 
00:27:26.418 [2024-07-15 23:53:15.331572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.418 [2024-07-15 23:53:15.331582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.418 qpair failed and we were unable to recover it.
00:27:26.700 [2024-07-15 23:53:15.331761 - 2024-07-15 23:53:15.356755] previous 3 messages repeated 114 more times: every connect() retry to 10.0.0.2 port 4420 on tqpair=0x7fbe48000b90 failed with errno = 111 and the qpair could not be recovered.
00:27:26.700 [2024-07-15 23:53:15.356983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.356993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.357206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.357215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.357354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.357364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.357562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.357572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.357824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.357834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 
00:27:26.700 [2024-07-15 23:53:15.358041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.358051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.358364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.358375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.358652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.358662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.358936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.358945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.359123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.359133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 
00:27:26.700 [2024-07-15 23:53:15.359394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.359404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.359608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.359618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.359920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.359930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.360198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.360208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.360404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.360414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 
00:27:26.700 [2024-07-15 23:53:15.360633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.360642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.360911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.360921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.361148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.361157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.361433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.361443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 00:27:26.700 [2024-07-15 23:53:15.361716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.700 [2024-07-15 23:53:15.361726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.700 qpair failed and we were unable to recover it. 
00:27:26.701 [2024-07-15 23:53:15.361955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.361965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.362243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.362253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.362437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.362447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.362739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.362749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.363028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.363038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 
00:27:26.701 [2024-07-15 23:53:15.363306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.363317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.363539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.363549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.363744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.363754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.364052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.364062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.364328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.364338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 
00:27:26.701 [2024-07-15 23:53:15.364540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.364551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.364745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.364755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.364972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.364982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.365172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.365182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.365374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.365384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 
00:27:26.701 [2024-07-15 23:53:15.365631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.365641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.365935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.365945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.366233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.366243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.366421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.366431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.366700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.366710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 
00:27:26.701 [2024-07-15 23:53:15.367010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.367020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.367138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.367148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.367418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.367428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.367707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.367716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.367853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.367863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 
00:27:26.701 [2024-07-15 23:53:15.368090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.368100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.368359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.368370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.368550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.368559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.368784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.368793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.369141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.369151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 
00:27:26.701 [2024-07-15 23:53:15.369352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.369362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.369581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.369591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.369786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.369796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.370071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.370081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.370264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.370274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 
00:27:26.701 [2024-07-15 23:53:15.370496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.370506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.370774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.370783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.370981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.370991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.371262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.371272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 00:27:26.701 [2024-07-15 23:53:15.371559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.371569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.701 qpair failed and we were unable to recover it. 
00:27:26.701 [2024-07-15 23:53:15.371765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.701 [2024-07-15 23:53:15.371775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 00:27:26.702 [2024-07-15 23:53:15.371970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.371980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 00:27:26.702 [2024-07-15 23:53:15.372241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.372251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 00:27:26.702 [2024-07-15 23:53:15.372518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.372528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 00:27:26.702 [2024-07-15 23:53:15.372729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.372738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 
00:27:26.702 [2024-07-15 23:53:15.372876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.372885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 00:27:26.702 [2024-07-15 23:53:15.373019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.373029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 00:27:26.702 [2024-07-15 23:53:15.373210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.373220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 00:27:26.702 [2024-07-15 23:53:15.373469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.373479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 00:27:26.702 [2024-07-15 23:53:15.373745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.373755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 
00:27:26.702 [2024-07-15 23:53:15.374016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.374027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 00:27:26.702 [2024-07-15 23:53:15.374206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.702 [2024-07-15 23:53:15.374216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.702 qpair failed and we were unable to recover it. 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Write completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Write completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Write completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Write completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 
Read completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed 00:27:26.702 Write completed with error (sct=0, sc=8) 00:27:26.702 starting I/O failed
[... 15 additional Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:27:26.702 [2024-07-15 23:53:15.374520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:26.702 [2024-07-15 23:53:15.374747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.702 [2024-07-15 23:53:15.374779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:26.702 qpair failed and we were unable to recover it.
[... the same three-line pattern — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 23:53:15.374989 through 23:53:15.402734: first for tqpair=0x21d1ed0, then tqpair=0x7fbe50000b90, briefly tqpair=0x21d1ed0 again, and finally tqpair=0x7fbe48000b90 ...]
00:27:26.705 [2024-07-15 23:53:15.403011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.403021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.403295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.403307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.403514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.403524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.403712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.403722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.403964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.403974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 
00:27:26.705 [2024-07-15 23:53:15.404246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.404257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.404515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.404525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.404769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.404778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.405022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.405031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.405307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.405317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 
00:27:26.705 [2024-07-15 23:53:15.405503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.405513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.405809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.405819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.406068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.406078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.406268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.406278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.406521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.406531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 
00:27:26.705 [2024-07-15 23:53:15.406724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.406733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.406965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.406975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.407173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.407183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.407458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.407468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.407741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.407751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 
00:27:26.705 [2024-07-15 23:53:15.408027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.408036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.408311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.408321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.705 [2024-07-15 23:53:15.408564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.705 [2024-07-15 23:53:15.408574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.705 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.408698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.408708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.408986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.408995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 
00:27:26.706 [2024-07-15 23:53:15.409193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.409202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.409472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.409483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.409671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.409681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.409903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.409913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.410162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.410172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 
00:27:26.706 [2024-07-15 23:53:15.410357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.410367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.410545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.410555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.410767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.410777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.411035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.411045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.411242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.411252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 
00:27:26.706 [2024-07-15 23:53:15.411450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.411460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.411715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.411724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.411913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.411923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.412141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.412150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.412424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.412434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 
00:27:26.706 [2024-07-15 23:53:15.412632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.412642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.412910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.412921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.413201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.413211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.413434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.413444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.413629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.413639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 
00:27:26.706 [2024-07-15 23:53:15.413932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.413942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.414135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.414144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.414337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.414348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.414616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.414626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.414896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.414906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 
00:27:26.706 [2024-07-15 23:53:15.415179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.415189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.415373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.415383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.415629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.415639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.415914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.415924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.416197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.416206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 
00:27:26.706 [2024-07-15 23:53:15.416480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.416490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.706 qpair failed and we were unable to recover it. 00:27:26.706 [2024-07-15 23:53:15.416760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.706 [2024-07-15 23:53:15.416770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.416970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.416980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.417272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.417283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.417527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.417537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 
00:27:26.707 [2024-07-15 23:53:15.417788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.417798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.417930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.417939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.418186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.418196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.418470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.418480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.418753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.418763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 
00:27:26.707 [2024-07-15 23:53:15.418952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.418961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.419149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.419159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.419410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.419420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.419693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.419703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.419972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.419982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 
00:27:26.707 [2024-07-15 23:53:15.420237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.420247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.420447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.420457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.420677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.420687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.420886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.420896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.421168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.421178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 
00:27:26.707 [2024-07-15 23:53:15.421464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.421474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.421654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.421664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.421864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.421874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.422087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.422096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 00:27:26.707 [2024-07-15 23:53:15.422366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.707 [2024-07-15 23:53:15.422377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.707 qpair failed and we were unable to recover it. 
00:27:26.710 [2024-07-15 23:53:15.448466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.448476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.448702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.448712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.448957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.448966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.449166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.449176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.449368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.449378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 
00:27:26.710 [2024-07-15 23:53:15.449669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.449679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.449815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.449824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.449973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.449983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.450176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.450186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.450402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.450412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 
00:27:26.710 [2024-07-15 23:53:15.450595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.450605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.450821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.450832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.451026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.451036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.451331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.451341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.451531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.451541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 
00:27:26.710 [2024-07-15 23:53:15.451731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.451741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.451922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.451932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.452083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.452092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.452289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.452299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.452490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.452500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 
00:27:26.710 [2024-07-15 23:53:15.452751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.452761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.452943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.452954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.453167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.453177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.453370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.453380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.453675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.453685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 
00:27:26.710 [2024-07-15 23:53:15.453937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.453947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.454140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.454149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.454365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.454375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.454575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.454585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 00:27:26.710 [2024-07-15 23:53:15.454847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.710 [2024-07-15 23:53:15.454857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.710 qpair failed and we were unable to recover it. 
00:27:26.710 [2024-07-15 23:53:15.455060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.455069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.455267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.455277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.455527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.455537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.455809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.455819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.455960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.455970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 
00:27:26.711 [2024-07-15 23:53:15.456161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.456170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.456454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.456464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.456596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.456606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.456881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.456891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.457073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.457082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 
00:27:26.711 [2024-07-15 23:53:15.457380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.457390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.457530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.457540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.457809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.457819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.458064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.458074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.458329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.458339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 
00:27:26.711 [2024-07-15 23:53:15.458535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.458546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.458813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.458822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.459071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.459080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.459346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.459357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.459499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.459508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 
00:27:26.711 [2024-07-15 23:53:15.459642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.459652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.459862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.459873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.460055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.460065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.460323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.460332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.460580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.460590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 
00:27:26.711 [2024-07-15 23:53:15.460841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.460851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.460998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.461008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.461191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.461200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.461330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.461340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.461459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.461469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 
00:27:26.711 [2024-07-15 23:53:15.461586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.461596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.461749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.461758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.461979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.461989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.462178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.462187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 00:27:26.711 [2024-07-15 23:53:15.462367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.711 [2024-07-15 23:53:15.462377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.711 qpair failed and we were unable to recover it. 
00:27:26.711 [2024-07-15 23:53:15.462573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.462583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 00:27:26.712 [2024-07-15 23:53:15.462704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.462714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 00:27:26.712 [2024-07-15 23:53:15.462895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.462904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 00:27:26.712 [2024-07-15 23:53:15.463103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.463113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 00:27:26.712 [2024-07-15 23:53:15.463379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.463389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 
00:27:26.712 [2024-07-15 23:53:15.463693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.463702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 00:27:26.712 [2024-07-15 23:53:15.463922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.463931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 00:27:26.712 [2024-07-15 23:53:15.464178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.464187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 00:27:26.712 [2024-07-15 23:53:15.464376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.464386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 00:27:26.712 [2024-07-15 23:53:15.464659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.464668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 
00:27:26.712 [2024-07-15 23:53:15.464904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.712 [2024-07-15 23:53:15.464913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.712 qpair failed and we were unable to recover it. 
[... the same error pair — posix.c:1038:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 23:53:15.464904 through 23:53:15.490412 ...]
00:27:26.715 [2024-07-15 23:53:15.490402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.490412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 
00:27:26.715 [2024-07-15 23:53:15.490606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.490616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.490817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.490827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.491025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.491035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.491286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.491297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.491569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.491579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 
00:27:26.715 [2024-07-15 23:53:15.491708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.491718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.491912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.491922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.492083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.492093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.492290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.492303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.492520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.492530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 
00:27:26.715 [2024-07-15 23:53:15.492737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.492747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.493019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.493029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.493332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.493343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.493597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.493607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.493814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.493824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 
00:27:26.715 [2024-07-15 23:53:15.494094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.494103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.494214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.494228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.494456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.494466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.494652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.494661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.494888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.494898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 
00:27:26.715 [2024-07-15 23:53:15.495013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.495023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.495205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.495215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.495399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.495409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.495674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.495684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.495886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.495895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 
00:27:26.715 [2024-07-15 23:53:15.496095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.496105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.496237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.496247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.496440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.496450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.496638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.496648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.496827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.496836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 
00:27:26.715 [2024-07-15 23:53:15.497039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.497049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.497264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.497275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.715 qpair failed and we were unable to recover it. 00:27:26.715 [2024-07-15 23:53:15.497380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.715 [2024-07-15 23:53:15.497390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.497524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.497534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.497725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.497735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 
00:27:26.716 [2024-07-15 23:53:15.497960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.497970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.498096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.498107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.498324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.498335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.498525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.498535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.498794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.498804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 
00:27:26.716 [2024-07-15 23:53:15.498985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.498995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.499209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.499220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.499423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.499434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.499570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.499580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.499799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.499809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 
00:27:26.716 [2024-07-15 23:53:15.499986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.499996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.500197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.500207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.500354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.500365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.500561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.500573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.500691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.500702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 
00:27:26.716 [2024-07-15 23:53:15.500970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.500980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.501176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.501186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.501376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.501387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.501516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.501526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.501732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.501743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 
00:27:26.716 [2024-07-15 23:53:15.501868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.501878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.502068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.502078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.502280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.502290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.502481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.502491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.502621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.502631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 
00:27:26.716 [2024-07-15 23:53:15.502900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.502911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.503103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.503114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.503245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.503256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.503507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.503517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.503698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.503708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 
00:27:26.716 [2024-07-15 23:53:15.503915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.503925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.504217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.504236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.504350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.504360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.504500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.504510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.504768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.504778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 
00:27:26.716 [2024-07-15 23:53:15.505025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.505034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.505151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.505162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.505319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.716 [2024-07-15 23:53:15.505330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.716 qpair failed and we were unable to recover it. 00:27:26.716 [2024-07-15 23:53:15.505574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.717 [2024-07-15 23:53:15.505584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.717 qpair failed and we were unable to recover it. 00:27:26.717 [2024-07-15 23:53:15.505727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.717 [2024-07-15 23:53:15.505737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.717 qpair failed and we were unable to recover it. 
00:27:26.717 [2024-07-15 23:53:15.505860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.717 [2024-07-15 23:53:15.505870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.717 qpair failed and we were unable to recover it. 
[the same connect()/nvme_tcp_qpair_connect_sock failure triplet repeats continuously from 23:53:15.505998 through 23:53:15.527400 — every attempt to 10.0.0.2:4420 on tqpair=0x7fbe48000b90 fails with errno = 111 (connection refused), followed by "qpair failed and we were unable to recover it."]
00:27:26.720 [2024-07-15 23:53:15.527577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.527587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.527710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.527720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.527851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.527861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.527980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.527990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.528110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.528120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 
00:27:26.720 [2024-07-15 23:53:15.528242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.528253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.528502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.528512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.528695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.528705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.528906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.528915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.529110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.529120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 
00:27:26.720 [2024-07-15 23:53:15.529301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.529311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.529500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.529510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.529705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.529715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.529905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.529914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.530096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.530106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 
00:27:26.720 [2024-07-15 23:53:15.530374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.530385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.530580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.530589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.530766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.530776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.530976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.530988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.531117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.531127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 
00:27:26.720 [2024-07-15 23:53:15.531254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.531264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.531396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.531406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.531554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.531563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.531749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.531760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.531979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.531988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 
00:27:26.720 [2024-07-15 23:53:15.532188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.532197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.532463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.532474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.532690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.532700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.532995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.533005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.533205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.533215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 
00:27:26.720 [2024-07-15 23:53:15.533415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.533425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.533630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.533640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.533907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.533917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.534028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.534038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.534310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.534320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 
00:27:26.720 [2024-07-15 23:53:15.534449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.534459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.534737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.534747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.534923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.534933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.535128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.535138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 00:27:26.720 [2024-07-15 23:53:15.535339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.720 [2024-07-15 23:53:15.535349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.720 qpair failed and we were unable to recover it. 
00:27:26.720 [2024-07-15 23:53:15.535546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.535556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.535736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.535746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.535989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.535999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.536208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.536218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.536527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.536537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 
00:27:26.721 [2024-07-15 23:53:15.536803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.536813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.537057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.537067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.537277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.537287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.537482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.537492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.537671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.537680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 
00:27:26.721 [2024-07-15 23:53:15.537901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.537911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.538090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.538100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.538234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.538244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.538563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.538573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.538842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.538851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 
00:27:26.721 [2024-07-15 23:53:15.539099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.539109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.539350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.539360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.539632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.539642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.539898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.539909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.540106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.540115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 
00:27:26.721 [2024-07-15 23:53:15.540412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.540422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.540693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.540703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.540960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.540969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.541167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.541176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.541392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.541403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 
00:27:26.721 [2024-07-15 23:53:15.541660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.541670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.541914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.541924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.542188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.542198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.542390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.542400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.542696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.542706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 
00:27:26.721 [2024-07-15 23:53:15.542984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.542993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.543237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.543247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.543493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.543503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.543769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.543779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 00:27:26.721 [2024-07-15 23:53:15.544100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.721 [2024-07-15 23:53:15.544110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.721 qpair failed and we were unable to recover it. 
00:27:26.721 [2024-07-15 23:53:15.544401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.721 [2024-07-15 23:53:15.544412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.721 qpair failed and we were unable to recover it.
[... the three-line failure pattern above (connect() failed with errno = 111 / ECONNREFUSED, sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420, qpair unrecoverable) repeats identically for every reconnect attempt between 23:53:15.544401 and 23:53:15.570626; only the timestamps differ ...]
00:27:26.724 [2024-07-15 23:53:15.570616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.724 [2024-07-15 23:53:15.570626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.724 qpair failed and we were unable to recover it.
00:27:26.724 [2024-07-15 23:53:15.570818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.724 [2024-07-15 23:53:15.570828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.724 qpair failed and we were unable to recover it. 00:27:26.724 [2024-07-15 23:53:15.571041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.724 [2024-07-15 23:53:15.571051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.724 qpair failed and we were unable to recover it. 00:27:26.724 [2024-07-15 23:53:15.571257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.724 [2024-07-15 23:53:15.571268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.724 qpair failed and we were unable to recover it. 00:27:26.724 [2024-07-15 23:53:15.571392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.724 [2024-07-15 23:53:15.571402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.724 qpair failed and we were unable to recover it. 00:27:26.724 [2024-07-15 23:53:15.571672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.724 [2024-07-15 23:53:15.571681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.724 qpair failed and we were unable to recover it. 
00:27:26.724 [2024-07-15 23:53:15.571860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.724 [2024-07-15 23:53:15.571869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.724 qpair failed and we were unable to recover it. 00:27:26.724 [2024-07-15 23:53:15.572076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.724 [2024-07-15 23:53:15.572086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.724 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.572279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.572289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.572546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.572556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.572771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.572781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 
00:27:26.725 [2024-07-15 23:53:15.572991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.573001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.573272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.573283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.573586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.573595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.573869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.573878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.574074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.574084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 
00:27:26.725 [2024-07-15 23:53:15.574332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.574342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.574552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.574561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.574853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.574863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.575103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.575113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.575307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.575317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 
00:27:26.725 [2024-07-15 23:53:15.575507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.575517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.575783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.575792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.576060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.576070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.576317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.576327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.576600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.576610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 
00:27:26.725 [2024-07-15 23:53:15.576788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.576798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.577068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.577077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.577280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.577290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.577540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.577552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.577774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.577784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 
00:27:26.725 [2024-07-15 23:53:15.578035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.578045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.578301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.578311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.578579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.578588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.578823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.578833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.578980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.578989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 
00:27:26.725 [2024-07-15 23:53:15.579200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.579210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.579476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.579486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.579763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.579772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.580019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.580028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.580330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.580341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 
00:27:26.725 [2024-07-15 23:53:15.580620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.580630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.580769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.580779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.581078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.581088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.581265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.581275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.581470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.581479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 
00:27:26.725 [2024-07-15 23:53:15.581729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.581739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.582013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.582022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.725 qpair failed and we were unable to recover it. 00:27:26.725 [2024-07-15 23:53:15.582271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.725 [2024-07-15 23:53:15.582281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.582548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.582558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.582749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.582759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 
00:27:26.726 [2024-07-15 23:53:15.582948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.582958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.583177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.583186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.583434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.583444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.583708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.583718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.583997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.584006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 
00:27:26.726 [2024-07-15 23:53:15.584210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.584220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.584428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.584438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.584637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.584647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.584893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.584902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.585120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.585130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 
00:27:26.726 [2024-07-15 23:53:15.585402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.585412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.585596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.585606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.585878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.585888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.586085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.586095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.586342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.586352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 
00:27:26.726 [2024-07-15 23:53:15.586616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.586625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.586908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.586917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.587166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.587175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.587425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.587437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.587707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.587717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 
00:27:26.726 [2024-07-15 23:53:15.587963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.587972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.588173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.588183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.588429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.588439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.588633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.588643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.588853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.588863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 
00:27:26.726 [2024-07-15 23:53:15.589054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.589064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.589255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.589265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.589402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.589412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.589656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.589665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.589933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.589943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 
00:27:26.726 [2024-07-15 23:53:15.590140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.590150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.590422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.590432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.590720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.590729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.590948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.590958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 00:27:26.726 [2024-07-15 23:53:15.591232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.726 [2024-07-15 23:53:15.591242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.726 qpair failed and we were unable to recover it. 
00:27:26.726 [2024-07-15 23:53:15.591520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.726 [2024-07-15 23:53:15.591530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.726 qpair failed and we were unable to recover it.
00:27:26.726 [2024-07-15 23:53:15.591800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.726 [2024-07-15 23:53:15.591810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.726 qpair failed and we were unable to recover it.
00:27:26.726 [2024-07-15 23:53:15.592059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.726 [2024-07-15 23:53:15.592069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.726 qpair failed and we were unable to recover it.
00:27:26.726 [2024-07-15 23:53:15.592318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.726 [2024-07-15 23:53:15.592328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.592530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.592540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.592817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.592827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.593110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.593120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.593305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.593315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.593518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.593528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.593726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.593736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.593951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.593962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.594232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.594242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.594377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.594387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.594657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.594666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.594932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.594942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.595185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.595195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.595453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.595463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.595737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.595747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.596002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.596012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.596212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.596222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.596430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.596439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.596726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.596736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.597005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.597015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.597286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.597296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.597548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.597558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.597750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.597759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.598005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.598014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.598284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.598294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.598552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.598562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.598839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.598848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.599039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.599049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.599293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.599303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.599545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.599555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.599748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.599757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.600052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.600061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.600277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.600287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.600544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.600553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.600839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.600848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.601029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.601039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.601244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.601255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.601444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.601453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.601712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.601721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.601901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.601911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.602114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.602124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.727 qpair failed and we were unable to recover it.
00:27:26.727 [2024-07-15 23:53:15.602324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.727 [2024-07-15 23:53:15.602334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.602519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.602529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.602727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.602736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.603004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.603014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.603318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.603328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.603579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.603588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.603769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.603781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.604053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.604063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.604334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.604344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.604620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.604630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.604904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.604914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.605188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.605198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.605468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.605478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.605754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.605764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.606015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.606025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.606166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.606176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.606389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.606399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.606670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.606680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.606791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.606801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.607019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.607028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.607219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.607232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.607411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.607421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.607643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.607652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.607921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.607930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.608193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.608203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.608423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.608434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.608615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.608625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.608813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.608823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.609099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.609109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.609417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.609427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.609612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.609622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.609909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.609918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.610134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.610144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.610284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.610295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.610572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.610582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.728 [2024-07-15 23:53:15.610832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.728 [2024-07-15 23:53:15.610841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.728 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.611135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.611145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.611414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.611424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.611684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.611693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.611819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.611828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.612099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.612109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.612311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.612321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.612520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.612530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.612707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.612717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.612985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.612995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.613261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.613271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.613545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.613557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.613775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.613785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.613975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.613984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.614254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.614265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.614511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.729 [2024-07-15 23:53:15.614520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:26.729 qpair failed and we were unable to recover it.
00:27:26.729 [2024-07-15 23:53:15.614767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.614776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.615052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.615061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.615255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.615265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.615529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.615540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.615839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.615849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 
00:27:26.729 [2024-07-15 23:53:15.616138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.616148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.616432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.616442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.616709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.616718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.616918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.616928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.617173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.617183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 
00:27:26.729 [2024-07-15 23:53:15.617389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.617399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.617697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.617707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.617895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.617905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.618150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.618160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.618429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.618439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 
00:27:26.729 [2024-07-15 23:53:15.618696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.618707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.618933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.618943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.619084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.619094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.619367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.619377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.619672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.619681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 
00:27:26.729 [2024-07-15 23:53:15.619935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.619944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.620219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.620231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.620497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.620507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.620786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.620796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.729 [2024-07-15 23:53:15.621012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.621022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 
00:27:26.729 [2024-07-15 23:53:15.621215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.729 [2024-07-15 23:53:15.621227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.729 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.621474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.621483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.621732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.621741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.621866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.621876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.622145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.622154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 
00:27:26.730 [2024-07-15 23:53:15.622354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.622364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.622575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.622585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.622782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.622792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.622972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.622981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.623186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.623196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 
00:27:26.730 [2024-07-15 23:53:15.623319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.623331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.623580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.623589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.623858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.623868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.624123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.624133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.624378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.624388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 
00:27:26.730 [2024-07-15 23:53:15.624569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.624579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.624847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.624856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.625055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.625064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.625348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.625358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.625608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.625618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 
00:27:26.730 [2024-07-15 23:53:15.625799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.625809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.626080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.626090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.626351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.626361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.626546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.626555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.626804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.626814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 
00:27:26.730 [2024-07-15 23:53:15.627082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.627091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.627281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.627291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.627539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.627549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.627794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.627804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.627931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.627940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 
00:27:26.730 [2024-07-15 23:53:15.628152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.628162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.628439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.628449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.628629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.628638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.628858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.628867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.629045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.629055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 
00:27:26.730 [2024-07-15 23:53:15.629355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.629365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.629650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.629660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.629888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.629897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.630162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.630172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.630420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.630430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 
00:27:26.730 [2024-07-15 23:53:15.630683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.630693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.730 [2024-07-15 23:53:15.630870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.730 [2024-07-15 23:53:15.630880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.730 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.631067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.631076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.631299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.631308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.631554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.631564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 
00:27:26.731 [2024-07-15 23:53:15.631804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.631814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.632061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.632071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.632318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.632334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.632583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.632593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.632855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.632865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 
00:27:26.731 [2024-07-15 23:53:15.633117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.633129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.633396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.633406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.633595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.633604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.633800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.633810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.634003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.634013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 
00:27:26.731 [2024-07-15 23:53:15.634279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.634290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.634583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.634593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.634768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.634778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.635068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.635078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.635273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.635283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 
00:27:26.731 [2024-07-15 23:53:15.635479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.635489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.635680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.635689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.635952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.635962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.636237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.636246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 00:27:26.731 [2024-07-15 23:53:15.636447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.731 [2024-07-15 23:53:15.636457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:26.731 qpair failed and we were unable to recover it. 
00:27:27.011 [2024-07-15 23:53:15.662481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.662491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.662692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.662701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.662975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.662985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.663177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.663186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.663310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.663320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 
00:27:27.011 [2024-07-15 23:53:15.663510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.663520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.663744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.663754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.664003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.664014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.664286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.664296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.664572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.664582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 
00:27:27.011 [2024-07-15 23:53:15.664856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.664866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.665109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.665119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.665358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.665368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.665550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.665560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.665836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.665845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 
00:27:27.011 [2024-07-15 23:53:15.666039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.666049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.666243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.666254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.666387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.666397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.666646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.666659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.666952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.666962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 
00:27:27.011 [2024-07-15 23:53:15.667213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.011 [2024-07-15 23:53:15.667223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.011 qpair failed and we were unable to recover it. 00:27:27.011 [2024-07-15 23:53:15.667470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.667480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.667749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.667758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.668027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.668038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.668293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.668303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 
00:27:27.012 [2024-07-15 23:53:15.668593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.668602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.668846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.668856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.669102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.669112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.669383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.669393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.669587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.669596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 
00:27:27.012 [2024-07-15 23:53:15.669777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.669787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.670057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.670066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.670204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.670214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.670415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.670425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.670673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.670682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 
00:27:27.012 [2024-07-15 23:53:15.670946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.670958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.671082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.671092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.671289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.671300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.671569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.671580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.671783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.671793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 
00:27:27.012 [2024-07-15 23:53:15.671986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.671996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.672192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.672202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.672447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.672458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.672657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.672666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.672886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.672896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 
00:27:27.012 [2024-07-15 23:53:15.673104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.673113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.673325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.673335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.673535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.673545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.673765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.673774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.674022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.674032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 
00:27:27.012 [2024-07-15 23:53:15.674325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.674335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.674578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.674588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.674881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.674891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.675071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.675080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.675370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.675380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 
00:27:27.012 [2024-07-15 23:53:15.675592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.675601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.675793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.675803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.676067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.676077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.676275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.676285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.676583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.676593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 
00:27:27.012 [2024-07-15 23:53:15.676771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.676781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.677044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.012 [2024-07-15 23:53:15.677054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.012 qpair failed and we were unable to recover it. 00:27:27.012 [2024-07-15 23:53:15.677288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.677298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 00:27:27.013 [2024-07-15 23:53:15.677492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.677502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 00:27:27.013 [2024-07-15 23:53:15.677631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.677640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 
00:27:27.013 [2024-07-15 23:53:15.677831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.677841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 00:27:27.013 [2024-07-15 23:53:15.678019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.678028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 00:27:27.013 [2024-07-15 23:53:15.678322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.678332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 00:27:27.013 [2024-07-15 23:53:15.678536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.678546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 00:27:27.013 [2024-07-15 23:53:15.678841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.678851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 
00:27:27.013 [2024-07-15 23:53:15.679049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.679059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 00:27:27.013 [2024-07-15 23:53:15.679332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.679342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 00:27:27.013 [2024-07-15 23:53:15.679538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.679548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 00:27:27.013 [2024-07-15 23:53:15.679839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.679848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 00:27:27.013 [2024-07-15 23:53:15.680027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.013 [2024-07-15 23:53:15.680037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.013 qpair failed and we were unable to recover it. 
00:27:27.013 [2024-07-15 23:53:15.680257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.013 [2024-07-15 23:53:15.680269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.013 qpair failed and we were unable to recover it.
(last three-line sequence repeated through [2024-07-15 23:53:15.707937]: the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." messages, first for tqpair=0x7fbe48000b90 and later for tqpair=0x7fbe40000b90, all with addr=10.0.0.2, port=4420)
00:27:27.016 [2024-07-15 23:53:15.708138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.708148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.708294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.708305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.708552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.708561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.708752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.708762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.709032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.709042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 
00:27:27.016 [2024-07-15 23:53:15.709261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.709273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.709547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.709557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.709864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.709873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.709998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.710008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.710264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.710274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 
00:27:27.016 [2024-07-15 23:53:15.710466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.710476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.710795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.710805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.710987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.710997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.711242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.711252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.711476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.711486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 
00:27:27.016 [2024-07-15 23:53:15.711669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.711678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.711864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.711874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.712016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.712025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.712239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.712249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.712465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.712476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 
00:27:27.016 [2024-07-15 23:53:15.712585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.712594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.712796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.712806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.713032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.713042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.713293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.713303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 00:27:27.016 [2024-07-15 23:53:15.713562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.713572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.016 qpair failed and we were unable to recover it. 
00:27:27.016 [2024-07-15 23:53:15.713773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.016 [2024-07-15 23:53:15.713782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.713978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.713988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.714245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.714255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.714448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.714458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.714649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.714659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-15 23:53:15.714923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.714933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.715192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.715201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.715448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.715458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.715723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.715732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.715923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.715933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-15 23:53:15.716178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.716187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.716455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.716465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.716657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.716667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.716855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.716864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.717138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.717148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-15 23:53:15.717295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.717305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.717427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.717437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.717746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.717756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.718028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.718037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.718228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.718238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-15 23:53:15.718463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.718473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.718669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.718679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.718867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.718877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.719070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.719079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.719281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.719291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-15 23:53:15.719548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.719557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.719830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.719840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.720035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.720045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.720312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.720322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.720523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.720533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-15 23:53:15.720725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.720734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.720914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.720924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.721152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.721162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.721446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.721456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.721671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.721681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-15 23:53:15.721881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.721890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.722130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.722140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.722343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.722353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.722492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.722502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.722685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.722695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 
00:27:27.017 [2024-07-15 23:53:15.722959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.722968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.017 qpair failed and we were unable to recover it. 00:27:27.017 [2024-07-15 23:53:15.723144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.017 [2024-07-15 23:53:15.723154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.723341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.723352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.723567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.723577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.723854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.723864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 
00:27:27.018 [2024-07-15 23:53:15.724013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.724023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.724209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.724219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.724495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.724506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.724636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.724646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.724847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.724857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 
00:27:27.018 [2024-07-15 23:53:15.725067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.725077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.725323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.725333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.725453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.725462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.725684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.725695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 00:27:27.018 [2024-07-15 23:53:15.725876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.018 [2024-07-15 23:53:15.725885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.018 qpair failed and we were unable to recover it. 
00:27:27.021 [2024-07-15 23:53:15.751039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.751049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.751297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.751307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.751432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.751445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.751717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.751727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.751918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.751928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 
00:27:27.021 [2024-07-15 23:53:15.752131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.752141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.752347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.752357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.752547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.752557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.752752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.752762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.753033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.753043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 
00:27:27.021 [2024-07-15 23:53:15.753237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.753248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.753461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.753471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.753665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.753675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.753920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.753930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.754156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.754166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 
00:27:27.021 [2024-07-15 23:53:15.754360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.754370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.754642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.754652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.754849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.754858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.755134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.755144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.755336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.755346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 
00:27:27.021 [2024-07-15 23:53:15.755538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.755548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.755750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.755759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.756010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.756020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.756243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.756253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.756444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.756454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 
00:27:27.021 [2024-07-15 23:53:15.756699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.756709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.756874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.756884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.757071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.757081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.757278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.757288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.757549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.757559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 
00:27:27.021 [2024-07-15 23:53:15.757809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.757819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.758074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.758084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.021 qpair failed and we were unable to recover it. 00:27:27.021 [2024-07-15 23:53:15.758296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.021 [2024-07-15 23:53:15.758306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.758515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.758524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.758790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.758800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 
00:27:27.022 [2024-07-15 23:53:15.759058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.759068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.759282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.759292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.759475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.759485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.759599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.759609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.759735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.759745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 
00:27:27.022 [2024-07-15 23:53:15.759891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.759901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.760157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.760167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.760287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.760298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.760498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.760507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.760765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.760775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 
00:27:27.022 [2024-07-15 23:53:15.760917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.760927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.761109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.761119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.761234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.761245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.761518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.761528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.761745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.761755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 
00:27:27.022 [2024-07-15 23:53:15.762033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.762042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.762229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.762239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.762372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.762381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.762523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.762533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.762817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.762826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 
00:27:27.022 [2024-07-15 23:53:15.763048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.763058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.763269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.763279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.763433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.763442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.763739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.763749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.764042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.764052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 
00:27:27.022 [2024-07-15 23:53:15.764314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.764324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.764472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.764482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.764745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.764754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.765026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.765036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.765219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.765234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 
00:27:27.022 [2024-07-15 23:53:15.765430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.765440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.765566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.765576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.765847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.765857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.766039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.766049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.766297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.766307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 
00:27:27.022 [2024-07-15 23:53:15.766511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.766521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.766789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.766799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.767087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.022 [2024-07-15 23:53:15.767097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.022 qpair failed and we were unable to recover it. 00:27:27.022 [2024-07-15 23:53:15.767310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.023 [2024-07-15 23:53:15.767320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.023 qpair failed and we were unable to recover it. 00:27:27.023 [2024-07-15 23:53:15.767530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.023 [2024-07-15 23:53:15.767540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.023 qpair failed and we were unable to recover it. 
00:27:27.023 [2024-07-15 23:53:15.767832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.023 [2024-07-15 23:53:15.767842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.023 qpair failed and we were unable to recover it. 
00:27:27.026 [2024-07-15 23:53:15.793446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.793456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.793635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.793645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.793844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.793854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.794109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.794119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.794371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.794382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 
00:27:27.026 [2024-07-15 23:53:15.794640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.794650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.794831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.794841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.795033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.795042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.795183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.795193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.795437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.795447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 
00:27:27.026 [2024-07-15 23:53:15.795645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.795655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.795864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.795874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.796004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.796017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.796210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.796219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.796481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.796491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 
00:27:27.026 [2024-07-15 23:53:15.796750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.796760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.797001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.797011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.797289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.797299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.797560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.797571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.797819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.797829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 
00:27:27.026 [2024-07-15 23:53:15.798101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.798111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.798427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.798437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.798705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.798715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.798963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.798973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.799237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.799247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 
00:27:27.026 [2024-07-15 23:53:15.799446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.799456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.799702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.799711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.799958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.799968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.800154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.800164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.800411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.800421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 
00:27:27.026 [2024-07-15 23:53:15.800621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.800631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.800897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.800907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.801195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.801205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.801546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.801558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.801806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.801816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 
00:27:27.026 [2024-07-15 23:53:15.802107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.802117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.802387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.802397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.802676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.802686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.802874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.802884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 00:27:27.026 [2024-07-15 23:53:15.803092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.026 [2024-07-15 23:53:15.803102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.026 qpair failed and we were unable to recover it. 
00:27:27.026 [2024-07-15 23:53:15.803324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.803334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.803535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.803544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.803765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.803775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.803992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.804003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.804335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.804345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 
00:27:27.027 [2024-07-15 23:53:15.804662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.804672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.804918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.804927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.805131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.805141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.805370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.805380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.805570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.805580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 
00:27:27.027 [2024-07-15 23:53:15.805713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.805723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.805915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.805925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.806173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.806184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.806376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.806386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.806651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.806661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 
00:27:27.027 [2024-07-15 23:53:15.806911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.806921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.807130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.807140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.807362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.807373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.807641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.807651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.807851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.807861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 
00:27:27.027 [2024-07-15 23:53:15.808081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.808091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.808315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.808325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.808597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.808607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.808790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.808800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.809109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.809119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 
00:27:27.027 [2024-07-15 23:53:15.809314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.809325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.809512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.809522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.809811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.809820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.810044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.810054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.810312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.810322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 
00:27:27.027 [2024-07-15 23:53:15.810619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.810629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.810807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.810817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.811106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.811116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.811362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.811372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 00:27:27.027 [2024-07-15 23:53:15.811642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.027 [2024-07-15 23:53:15.811652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.027 qpair failed and we were unable to recover it. 
00:27:27.028 [2024-07-15 23:53:15.811930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.811939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.812125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.812135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.812431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.812441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.812685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.812695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.812973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.812983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 
00:27:27.028 [2024-07-15 23:53:15.813256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.813266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.813541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.813550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.813819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.813829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.813960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.813970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.814223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.814236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 
00:27:27.028 [2024-07-15 23:53:15.814483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.814492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.814781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.814790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.815059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.815069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.815334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.815344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.815566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.815576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 
00:27:27.028 [2024-07-15 23:53:15.815851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.815860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.816040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.816050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.816269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.816281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.816409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.816419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.816664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.816674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 
00:27:27.028 [2024-07-15 23:53:15.816801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.816811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.816997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.817007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.817273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.817283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.817465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.817474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.817743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.817753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 
00:27:27.028 [2024-07-15 23:53:15.817889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.817898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.818185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.818195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.818447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.818457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.818699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.818709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.818888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.818898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 
00:27:27.028 [2024-07-15 23:53:15.819080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.819090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.819362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.819372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.819669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.819679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.819941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.819951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.820169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.820179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 
00:27:27.028 [2024-07-15 23:53:15.820378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.820389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.820658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.820668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.820917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.820927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.821231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.821241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 00:27:27.028 [2024-07-15 23:53:15.821531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.821540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.028 qpair failed and we were unable to recover it. 
00:27:27.028 [2024-07-15 23:53:15.821824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.028 [2024-07-15 23:53:15.821834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.822086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.822095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.822344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.822354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.822553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.822563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.822810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.822821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 
00:27:27.029 [2024-07-15 23:53:15.823127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.823136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.823315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.823326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.823528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.823537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.823724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.823733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.823993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.824002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 
00:27:27.029 [2024-07-15 23:53:15.824329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.824340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.824541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.824551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.824825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.824835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.825118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.825128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.825346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.825356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 
00:27:27.029 [2024-07-15 23:53:15.825603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.825612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.825816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.825825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.826096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.826107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.826390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.826400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.826666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.826676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 
00:27:27.029 [2024-07-15 23:53:15.826896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.826906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.827182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.827192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.827389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.827399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.827540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.827550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.827687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.827697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 
00:27:27.029 [2024-07-15 23:53:15.827959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.827968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.828190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.828199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.828426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.828436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.828697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.828707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.828955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.828965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 
00:27:27.029 [2024-07-15 23:53:15.829209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.829219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.829421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.829432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.829701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.829711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.829924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.829934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.830182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.830192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 
00:27:27.029 [2024-07-15 23:53:15.830312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.830322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.830522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.830532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.830774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.830784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.831056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.831066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.831288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.831298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 
00:27:27.029 [2024-07-15 23:53:15.831434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.029 [2024-07-15 23:53:15.831445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.029 qpair failed and we were unable to recover it. 00:27:27.029 [2024-07-15 23:53:15.831631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.831640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.831825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.831834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.832109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.832119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.832411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.832422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 
00:27:27.030 [2024-07-15 23:53:15.832621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.832630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.832761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.832770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.833040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.833049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.833312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.833322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.833510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.833520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 
00:27:27.030 [2024-07-15 23:53:15.833708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.833718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.833914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.833924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.834246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.834256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.834510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.834519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.834786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.834796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 
00:27:27.030 [2024-07-15 23:53:15.834931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.834940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.835191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.835201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.835425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.835435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.835709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.835719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.835849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.835858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 
00:27:27.030 [2024-07-15 23:53:15.836044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.836054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.836246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.836257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.836457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.836467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.836737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.836747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 00:27:27.030 [2024-07-15 23:53:15.837003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.030 [2024-07-15 23:53:15.837013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.030 qpair failed and we were unable to recover it. 
00:27:27.030 [2024-07-15 23:53:15.837172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.837183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.837458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.837468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.837648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.837658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.837959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.837969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.838160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.838170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.838367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.838377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.838582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.838591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.838805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.838815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.839060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.839069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.839340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.839351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.839601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.839611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.839743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.839752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.839943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.839953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.840226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.840236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.840483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.840493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.840786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.840796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.840989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.030 [2024-07-15 23:53:15.840999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.030 qpair failed and we were unable to recover it.
00:27:27.030 [2024-07-15 23:53:15.841248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.841259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.841508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.841518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.841792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.841803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.841996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.842006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.842195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.842205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.842468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.842478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.842775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.842785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.842915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.842925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.843144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.843153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.843377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.843387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.843651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.843661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.843860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.843869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.844167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.844176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.844433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.844443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.844718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.844728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.844923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.844933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.845198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.845208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.845508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.845518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.845765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.845775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.846042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.846052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.846267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.846277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.846556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.846566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.846860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.846870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.847126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.847135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.847411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.847421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.847668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.847677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.847908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.847918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.848108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.848118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.848365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.848375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.848642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.848651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.848831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.848841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.849108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.849117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.849367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.849377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.849623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.849633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.849850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.031 [2024-07-15 23:53:15.849859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.031 qpair failed and we were unable to recover it.
00:27:27.031 [2024-07-15 23:53:15.849994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.850004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.850278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.850288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.850535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.850544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.850789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.850799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.851046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.851055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.851329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.851339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.851524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.851534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.851827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.851838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.852041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.852051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.852322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.852332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.852613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.852622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.852821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.852831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.853132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.853142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.853413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.853423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.853677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.853686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.853950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.853960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.854105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.854115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.854341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.854351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.854620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.854630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.854845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.854855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.855058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.855068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.855316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.855327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.855469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.855478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.855658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.855668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.855816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.855825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.856101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.856111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.856404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.856415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.856630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.856640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.856837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.856847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.857031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.857041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.857289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.857299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.857492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.857502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.857698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.857708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.857977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.857986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.858280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.858290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.858551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.858561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.858746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.858755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.858954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.858964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.859234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.859244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.859428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.859438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.032 qpair failed and we were unable to recover it.
00:27:27.032 [2024-07-15 23:53:15.859623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.032 [2024-07-15 23:53:15.859634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.033 qpair failed and we were unable to recover it.
00:27:27.033 [2024-07-15 23:53:15.859912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.033 [2024-07-15 23:53:15.859922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.033 qpair failed and we were unable to recover it.
00:27:27.033 [2024-07-15 23:53:15.860222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.033 [2024-07-15 23:53:15.860236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.033 qpair failed and we were unable to recover it.
00:27:27.033 [2024-07-15 23:53:15.860427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.860437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.860727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.860737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.861006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.861016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.861281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.861291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.861549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.861560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 
00:27:27.033 [2024-07-15 23:53:15.861828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.861838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.861964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.861974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.862193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.862202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.862385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.862395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.862698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.862708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 
00:27:27.033 [2024-07-15 23:53:15.862912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.862922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.863113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.863123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.863322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.863332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.863595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.863604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.863874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.863884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 
00:27:27.033 [2024-07-15 23:53:15.864178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.864188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.864448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.864459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.864756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.864766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.864897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.864906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.865177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.865187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 
00:27:27.033 [2024-07-15 23:53:15.865439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.865449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.865696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.865706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.865971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.865980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.866159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.866168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.866347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.866357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 
00:27:27.033 [2024-07-15 23:53:15.866492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.866503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.866646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.866656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.866849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.866860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.867039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.867049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.867322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.867333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 
00:27:27.033 [2024-07-15 23:53:15.867602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.867612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.867810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.867819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.868033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.868042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.868258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.868268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.868557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.868567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 
00:27:27.033 [2024-07-15 23:53:15.868839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.868849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.869107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.869117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.869308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.869318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.033 qpair failed and we were unable to recover it. 00:27:27.033 [2024-07-15 23:53:15.869583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.033 [2024-07-15 23:53:15.869593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.869791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.869801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 
00:27:27.034 [2024-07-15 23:53:15.870019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.870029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.870154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.870164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.870384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.870394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.870665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.870675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.870947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.870959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 
00:27:27.034 [2024-07-15 23:53:15.871155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.871164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.871387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.871397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.871617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.871627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.871830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.871840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.872048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.872058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 
00:27:27.034 [2024-07-15 23:53:15.872263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.872273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.872471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.872481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.872733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.872743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.873014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.873024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.873277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.873287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 
00:27:27.034 [2024-07-15 23:53:15.873525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.873535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.873744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.873754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.873941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.873950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.874130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.874140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.874268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.874278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 
00:27:27.034 [2024-07-15 23:53:15.874536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.874546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.874843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.874853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.875100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.875109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.875369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.875379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.875648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.875658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 
00:27:27.034 [2024-07-15 23:53:15.875942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.875952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.876223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.876240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.876510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.876520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.876799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.876809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.877004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.877014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 
00:27:27.034 [2024-07-15 23:53:15.877261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.877272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.877543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.877553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.877820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.877830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.878099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.878108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.878366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.878376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 
00:27:27.034 [2024-07-15 23:53:15.878563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.878573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.878842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.878852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.879149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.879159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.879417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.879428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.034 qpair failed and we were unable to recover it. 00:27:27.034 [2024-07-15 23:53:15.879698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.034 [2024-07-15 23:53:15.879708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 
00:27:27.035 [2024-07-15 23:53:15.879840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.035 [2024-07-15 23:53:15.879850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 00:27:27.035 [2024-07-15 23:53:15.880119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.035 [2024-07-15 23:53:15.880129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 00:27:27.035 [2024-07-15 23:53:15.880379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.035 [2024-07-15 23:53:15.880389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 00:27:27.035 [2024-07-15 23:53:15.880632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.035 [2024-07-15 23:53:15.880641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 00:27:27.035 [2024-07-15 23:53:15.880902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.035 [2024-07-15 23:53:15.880914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 
00:27:27.035 [2024-07-15 23:53:15.881028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.035 [2024-07-15 23:53:15.881038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 00:27:27.035 [2024-07-15 23:53:15.881282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.035 [2024-07-15 23:53:15.881293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 00:27:27.035 [2024-07-15 23:53:15.881499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.035 [2024-07-15 23:53:15.881509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 00:27:27.035 [2024-07-15 23:53:15.881780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.035 [2024-07-15 23:53:15.881790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 00:27:27.035 [2024-07-15 23:53:15.882074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.035 [2024-07-15 23:53:15.882084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.035 qpair failed and we were unable to recover it. 
00:27:27.038 [2024-07-15 23:53:15.909487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.909496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.909677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.909687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.909957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.909967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.910163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.910173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.910458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.910470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 
00:27:27.038 [2024-07-15 23:53:15.910676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.910686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.910904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.910914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.911093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.911103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.911289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.911300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.911560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.911569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 
00:27:27.038 [2024-07-15 23:53:15.911768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.911778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.912023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.912033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.912304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.912315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.912570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.912580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.912760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.912770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 
00:27:27.038 [2024-07-15 23:53:15.912896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.912906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.913174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.913183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.913409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.913420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.913564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.913574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.913780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.913790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 
00:27:27.038 [2024-07-15 23:53:15.913990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.914000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.914248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.914258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.914525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.914535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.914813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.914822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.915006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.915016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 
00:27:27.038 [2024-07-15 23:53:15.915312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.915323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.915572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.915582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.915843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.915852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.916059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.916069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.916291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.916302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 
00:27:27.038 [2024-07-15 23:53:15.916499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.916509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.916781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.916791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.917009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.917019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.917266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.917276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.917461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.917471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 
00:27:27.038 [2024-07-15 23:53:15.917757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.917767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.038 [2024-07-15 23:53:15.918079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.038 [2024-07-15 23:53:15.918089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.038 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.918376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.918387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.918598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.918608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.918803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.918813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 
00:27:27.039 [2024-07-15 23:53:15.919109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.919119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.919304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.919314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.919563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.919573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.919840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.919849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.920106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.920117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 
00:27:27.039 [2024-07-15 23:53:15.920321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.920332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.920629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.920639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.920852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.920862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.921130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.921140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.921360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.921370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 
00:27:27.039 [2024-07-15 23:53:15.921642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.921652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.921926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.921935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.922206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.922215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.922408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.922418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.922712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.922721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 
00:27:27.039 [2024-07-15 23:53:15.922975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.922985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.923242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.923252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.923509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.923519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.923738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.923748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.923946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.923956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 
00:27:27.039 [2024-07-15 23:53:15.924155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.924165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.924419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.924430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.924702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.924712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.924892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.924901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.925169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.925179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 
00:27:27.039 [2024-07-15 23:53:15.925488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.925498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.925630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.925640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.925908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.925917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.926164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.926174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.926355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.926365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 
00:27:27.039 [2024-07-15 23:53:15.926609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.926619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.926890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.926900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.927149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.927159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.927340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.927350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.927550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.927560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 
00:27:27.039 [2024-07-15 23:53:15.927773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.927783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.927995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.928005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.039 qpair failed and we were unable to recover it. 00:27:27.039 [2024-07-15 23:53:15.928203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.039 [2024-07-15 23:53:15.928213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.928361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.928371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.928595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.928604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 
00:27:27.040 [2024-07-15 23:53:15.928849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.928860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.929143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.929153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.929347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.929358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.929548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.929558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.929823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.929835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 
00:27:27.040 [2024-07-15 23:53:15.929964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.929973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.930231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.930241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.930439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.930449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.930697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.930707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.930978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.930987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 
00:27:27.040 [2024-07-15 23:53:15.931238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.931248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.931441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.931451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.931696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.931706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.931975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.931985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.932236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.932246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 
00:27:27.040 [2024-07-15 23:53:15.932391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.932401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.932615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.932625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.932871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.932881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.933085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.933094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.933282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.933293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 
00:27:27.040 [2024-07-15 23:53:15.933473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.933483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.933745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.933755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.933980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.933990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.934191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.934200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.934471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.934481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 
00:27:27.040 [2024-07-15 23:53:15.934768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.934778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.934889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.934899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.935138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.935148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.935370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.935380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.040 [2024-07-15 23:53:15.935560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.935569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 
00:27:27.040 [2024-07-15 23:53:15.935820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.040 [2024-07-15 23:53:15.935829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.040 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.936152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.936186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.936459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.936475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.936711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.936725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.936999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.937012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 
00:27:27.041 [2024-07-15 23:53:15.937239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.937254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.937481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.937494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.937760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.937774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.937963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.937977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.938203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.938217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 
00:27:27.041 [2024-07-15 23:53:15.938424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.938438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.938663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.938676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.938934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.938948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.939130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.939144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.939347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.939361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 
00:27:27.041 [2024-07-15 23:53:15.939514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.939528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.939723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.939737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.940022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.940036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.940293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.940307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.940560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.940573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 
00:27:27.041 [2024-07-15 23:53:15.940838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.940851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.941114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.941127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.941430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.941444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.941719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.941733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.942021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.942035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 
00:27:27.041 [2024-07-15 23:53:15.942346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.942360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.942568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.942582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.942779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.942792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.943045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.943061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.943272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.943286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 
00:27:27.041 [2024-07-15 23:53:15.943409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.943422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.943608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.943622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.943846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.943859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.944095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.944109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.944380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.944402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 
00:27:27.041 [2024-07-15 23:53:15.944666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.944679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.944884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.944898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.945102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.945115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.945394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.945408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.945663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.945677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 
00:27:27.041 [2024-07-15 23:53:15.945954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.945967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.946199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.041 [2024-07-15 23:53:15.946213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.041 qpair failed and we were unable to recover it. 00:27:27.041 [2024-07-15 23:53:15.946474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.946488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.946703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.946717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.946943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.946956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 
00:27:27.042 [2024-07-15 23:53:15.947229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.947242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.947438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.947452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.947737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.947750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.948024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.948038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.948305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.948319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 
00:27:27.042 [2024-07-15 23:53:15.948507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.948521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.948818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.948831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.949084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.949098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.949376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.949391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.949615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.949628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 
00:27:27.042 [2024-07-15 23:53:15.949891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.949906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.950122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.950135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.950391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.950405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.950704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.950718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.950973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.950987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 
00:27:27.042 [2024-07-15 23:53:15.951257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.951271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.951548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.951562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.951768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.951781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.952035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.952049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.952254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.952268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 
00:27:27.042 [2024-07-15 23:53:15.952455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.952468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.952677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.952690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.952887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.952900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.953106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.953120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.953316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.953330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 
00:27:27.042 [2024-07-15 23:53:15.953471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.953484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.953712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.953726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.953944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.953957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.954249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.954263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.954451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.954465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 
00:27:27.042 [2024-07-15 23:53:15.954718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.954731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.954949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.954962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.955102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.955116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.955315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.955329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.955543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.955557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 
00:27:27.042 [2024-07-15 23:53:15.955752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.955766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.956019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.956033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.956175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.956192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.042 qpair failed and we were unable to recover it. 00:27:27.042 [2024-07-15 23:53:15.956400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.042 [2024-07-15 23:53:15.956414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.956613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.956626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 
00:27:27.043 [2024-07-15 23:53:15.956815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.956829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.957080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.957094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.957359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.957373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.957567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.957581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.957719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.957733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 
00:27:27.043 [2024-07-15 23:53:15.957942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.957955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.958233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.958247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.958529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.958543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.958748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.958762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.959045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.959059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 
00:27:27.043 [2024-07-15 23:53:15.959255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.959269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.959375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e0000 is same with the state(5) to be set 00:27:27.043 [2024-07-15 23:53:15.959604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.959616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.959768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.959777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.959907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.959916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.960184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.960194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 
00:27:27.043 [2024-07-15 23:53:15.960489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.960499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.960694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.960704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.960884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.960894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.961093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.961102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.961322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.961332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 
00:27:27.043 [2024-07-15 23:53:15.961578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.961587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.961814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.961823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.962020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.962030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.962232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.962242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.962497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.962507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 
00:27:27.043 [2024-07-15 23:53:15.962744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.962754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.963025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.963035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.963216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.963229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.963486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.963496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.963773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.963782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 
00:27:27.043 [2024-07-15 23:53:15.964075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.964085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.964280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.964290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.043 [2024-07-15 23:53:15.964566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.043 [2024-07-15 23:53:15.964576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.043 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.964770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.964780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.965070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.965080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 
00:27:27.321 [2024-07-15 23:53:15.965288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.965298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.965571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.965581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.965787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.965800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.966023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.966033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.966167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.966177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 
00:27:27.321 [2024-07-15 23:53:15.966395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.966405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.966599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.966608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.966741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.966750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.966996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.967007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.967298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.967308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 
00:27:27.321 [2024-07-15 23:53:15.967498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.967507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.967809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.967819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.968068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.968078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.968213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.968223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.968486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.968496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 
00:27:27.321 [2024-07-15 23:53:15.968680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.968690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.968988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.968998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.969271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.969281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.969540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.969551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.969790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.969800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 
00:27:27.321 [2024-07-15 23:53:15.970012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.970022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.970287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.970297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.970492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.970502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.970698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.970708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 00:27:27.321 [2024-07-15 23:53:15.970889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.321 [2024-07-15 23:53:15.970898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.321 qpair failed and we were unable to recover it. 
00:27:27.322 [2024-07-15 23:53:15.971116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.971126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.971393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.971403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.971593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.971603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.971753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.971762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.971946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.971956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 
00:27:27.322 [2024-07-15 23:53:15.972093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.972104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.972250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.972260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.972563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.972572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.972759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.972769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.973041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.973051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 
00:27:27.322 [2024-07-15 23:53:15.973243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.973253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.973502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.973512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.973753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.973763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.973908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.973918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 00:27:27.322 [2024-07-15 23:53:15.974128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.322 [2024-07-15 23:53:15.974137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.322 qpair failed and we were unable to recover it. 
00:27:27.325 [2024-07-15 23:53:15.999722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:15.999732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:15.999955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:15.999965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.000214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.000228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.000436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.000446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.000575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.000585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 
00:27:27.326 [2024-07-15 23:53:16.000856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.000866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.001118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.001128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.001373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.001384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.001600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.001610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.001907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.001916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 
00:27:27.326 [2024-07-15 23:53:16.002086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.002096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.002283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.002293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.002503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.002514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.002681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.002691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.002948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.002958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 
00:27:27.326 [2024-07-15 23:53:16.003203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.003212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.003407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.003417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.003694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.003704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.003930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.003939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.004186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.004196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 
00:27:27.326 [2024-07-15 23:53:16.004414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.004424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.004640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.004650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.326 [2024-07-15 23:53:16.004837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.326 [2024-07-15 23:53:16.004847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.326 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.005093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.005103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.005360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.005370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 
00:27:27.327 [2024-07-15 23:53:16.005570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.005580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.005769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.005779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.006043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.006053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.006317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.006328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.006521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.006531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 
00:27:27.327 [2024-07-15 23:53:16.006778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.006788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.006965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.006975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.007174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.007184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.007367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.007377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.007641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.007650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 
00:27:27.327 [2024-07-15 23:53:16.007840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.007850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.008037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.008047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.008247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.008257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.008472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.008481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.008591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.008601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 
00:27:27.327 [2024-07-15 23:53:16.008729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.008739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.008976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.008986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.009186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.009196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.009349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.009359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.009559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.009569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 
00:27:27.327 [2024-07-15 23:53:16.009771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.009781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.010048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.010058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.010155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.010164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.010432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.010442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.010570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.010580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 
00:27:27.327 [2024-07-15 23:53:16.010780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.010789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.011054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.011064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.011322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.011335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.011534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.011543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.327 qpair failed and we were unable to recover it. 00:27:27.327 [2024-07-15 23:53:16.011793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.327 [2024-07-15 23:53:16.011803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 
00:27:27.328 [2024-07-15 23:53:16.012080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.012090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.012218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.012232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.012463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.012473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.012758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.012768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.012909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.012918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 
00:27:27.328 [2024-07-15 23:53:16.013163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.013172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.013313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.013324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.013564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.013573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.013778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.013788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.013992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.014001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 
00:27:27.328 [2024-07-15 23:53:16.014251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.014261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.014559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.014569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.014760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.014770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.015046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.015056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.015251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.015262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 
00:27:27.328 [2024-07-15 23:53:16.015475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.015484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.015730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.015740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.015934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.015944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.016133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.016142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.016411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.016422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 
00:27:27.328 [2024-07-15 23:53:16.016621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.016631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.016851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.016861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.017106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.017115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.017327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.017338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.017609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.017618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 
00:27:27.328 [2024-07-15 23:53:16.017921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.017930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.018068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.018078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.018359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.018369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.018613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.018624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.018839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.018849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 
00:27:27.328 [2024-07-15 23:53:16.019061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.019071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.328 qpair failed and we were unable to recover it. 00:27:27.328 [2024-07-15 23:53:16.019320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.328 [2024-07-15 23:53:16.019331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.019527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.019537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.019734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.019744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.019963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.019972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 
00:27:27.329 [2024-07-15 23:53:16.020222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.020235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.020418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.020427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.020625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.020637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.020817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.020827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.021062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.021072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 
00:27:27.329 [2024-07-15 23:53:16.021205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.021215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.021485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.021495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.021740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.021750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.022001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.022010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.022280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.022290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 
00:27:27.329 [2024-07-15 23:53:16.022538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.022548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.022678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.022688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.022960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.022970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.023099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.023109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.023255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.023265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 
00:27:27.329 [2024-07-15 23:53:16.023500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.023510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.023650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.023660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.023875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.023885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.024062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.024071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.024347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.024357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 
00:27:27.329 [2024-07-15 23:53:16.024628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.024637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.024903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.024913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.025044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.025053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.025333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.025344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.025587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.025597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 
00:27:27.329 [2024-07-15 23:53:16.025772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.025782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.025959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.025969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.329 [2024-07-15 23:53:16.026270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.329 [2024-07-15 23:53:16.026281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.329 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.026406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.026416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.026692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.026702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 
00:27:27.330 [2024-07-15 23:53:16.026968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.026978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.027236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.027246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.027519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.027529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.027749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.027759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.028028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.028038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 
00:27:27.330 [2024-07-15 23:53:16.028351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.028361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.028630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.028640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.028915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.028925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.029192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.029202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.029400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.029410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 
00:27:27.330 [2024-07-15 23:53:16.029684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.029694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.029875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.029885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.030154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.030166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.030436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.030447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.030630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.030639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 
00:27:27.330 [2024-07-15 23:53:16.030836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.030845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.031123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.031133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.031263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.031273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.031511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.031521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.031668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.031677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 
00:27:27.330 [2024-07-15 23:53:16.031916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.031926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.032191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.032201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.032380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.032391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.032660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.032670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.032890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.032900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 
00:27:27.330 [2024-07-15 23:53:16.033170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.033180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.033377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.033388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.033652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.033662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.033958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.330 [2024-07-15 23:53:16.033968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.330 qpair failed and we were unable to recover it. 00:27:27.330 [2024-07-15 23:53:16.034149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.034159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 
00:27:27.331 [2024-07-15 23:53:16.034289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.034299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.034480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.034490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.034757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.034766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.035013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.035023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.035304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.035314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 
00:27:27.331 [2024-07-15 23:53:16.035510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.035521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.035766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.035776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.035960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.035969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.036159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.036170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.036360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.036371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 
00:27:27.331 [2024-07-15 23:53:16.036636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.036646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.036874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.036884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.037078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.037088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.037365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.037375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.037662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.037672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 
00:27:27.331 [2024-07-15 23:53:16.037903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.037913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.038096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.038106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.038326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.038336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.038488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.038498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 00:27:27.331 [2024-07-15 23:53:16.038767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.038776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it. 
00:27:27.331 [2024-07-15 23:53:16.039068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.331 [2024-07-15 23:53:16.039078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.331 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / sock connection error / qpair failed sequence for tqpair=0x7fbe48000b90 (addr=10.0.0.2, port=4420) repeated verbatim from 23:53:16.039345 through 23:53:16.067000 ...]
00:27:27.335 [2024-07-15 23:53:16.067248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.067259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.067460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.067470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.067721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.067731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.067921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.067931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.068141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.068150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 
00:27:27.335 [2024-07-15 23:53:16.068343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.068354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.068546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.068556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.068826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.068835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.069081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.069091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.069337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.069347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 
00:27:27.335 [2024-07-15 23:53:16.069528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.069538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.069787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.069797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.070042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.070052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.070249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.070260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.070465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.070475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 
00:27:27.335 [2024-07-15 23:53:16.070696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.335 [2024-07-15 23:53:16.070705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.335 qpair failed and we were unable to recover it. 00:27:27.335 [2024-07-15 23:53:16.070972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.070982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.071229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.071239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.071424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.071434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.071703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.071712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 
00:27:27.336 [2024-07-15 23:53:16.071983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.071993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.072198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.072208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.072497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.072507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.072637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.072648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.072915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.072925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 
00:27:27.336 [2024-07-15 23:53:16.073181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.073191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.073333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.073343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.073527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.073537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.073818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.073828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.074125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.074135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 
00:27:27.336 [2024-07-15 23:53:16.074390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.074401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.074676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.074687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.074877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.074887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.075086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.075095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.075240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.075252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 
00:27:27.336 [2024-07-15 23:53:16.075518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.075528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.075752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.075762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.075943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.075953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.076251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.076261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.076457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.076466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 
00:27:27.336 [2024-07-15 23:53:16.076645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.076655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.076925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.076935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.077205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.077215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.077442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.077467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.077779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.077813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 
00:27:27.336 [2024-07-15 23:53:16.078056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.078072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.336 qpair failed and we were unable to recover it. 00:27:27.336 [2024-07-15 23:53:16.078276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.336 [2024-07-15 23:53:16.078290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.078486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.078500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.078704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.078717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.078917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.078930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 
00:27:27.337 [2024-07-15 23:53:16.079207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.079220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.079451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.079465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.079724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.079737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.079945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.079958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.080246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.080260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 
00:27:27.337 [2024-07-15 23:53:16.080484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.080498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.080762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.080775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.080976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.080990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.081187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.081201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.081414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.081428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 
00:27:27.337 [2024-07-15 23:53:16.081684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.081697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.081891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.081905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.082177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.082190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.082333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.082347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.082620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.082634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 
00:27:27.337 [2024-07-15 23:53:16.082938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.082952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.083251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.083265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.083488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.083501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.083660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.083673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.337 [2024-07-15 23:53:16.083863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.083876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 
00:27:27.337 [2024-07-15 23:53:16.084145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.337 [2024-07-15 23:53:16.084159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.337 qpair failed and we were unable to recover it. 00:27:27.338 [2024-07-15 23:53:16.084461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.338 [2024-07-15 23:53:16.084475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.338 qpair failed and we were unable to recover it. 00:27:27.338 [2024-07-15 23:53:16.084678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.338 [2024-07-15 23:53:16.084691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.338 qpair failed and we were unable to recover it. 00:27:27.338 [2024-07-15 23:53:16.084951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.338 [2024-07-15 23:53:16.084964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.338 qpair failed and we were unable to recover it. 00:27:27.338 [2024-07-15 23:53:16.085197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.338 [2024-07-15 23:53:16.085213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.338 qpair failed and we were unable to recover it. 
00:27:27.338 [2024-07-15 23:53:16.085488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.085502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.085784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.085797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.086014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.086028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.086176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.086189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.086468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.086482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.086697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.086710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.086982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.086996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.087295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.087309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.087585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.087599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.087890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.087903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.088180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.088194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.088462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.088476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.088712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.088726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.088990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.089003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.089272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.089286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.089588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.089601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.089800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.089813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.090077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.090091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.090351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.090365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.090626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.090639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.090918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.090932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.091159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.091173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.091435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.091448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.091724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.091737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.091965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.091979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.092237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.092251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.092531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.092545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.092850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.092863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.338 [2024-07-15 23:53:16.093092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.338 [2024-07-15 23:53:16.093106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.338 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.093369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.093383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.093528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.093541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.093820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.093833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.094110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.094123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.094258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.094272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.094546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.094559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.094846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.094859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.095068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.095082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.095284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.095297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.095577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.095591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.095872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.095889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.096149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.096162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.096388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.096402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.096608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.096622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.096899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.096912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.097101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.097114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.097378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.097392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.097593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.097607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.097799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.097812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.098040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.098054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.098334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.098350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.098609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.098623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.098874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.098887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.099144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.099158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.099442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.099456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.099740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.099754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.099945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.099959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.100242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.100257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.100456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.100469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.100747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.100760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.101015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.101029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.101308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.101322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.101579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.101593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.339 qpair failed and we were unable to recover it.
00:27:27.339 [2024-07-15 23:53:16.101821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.339 [2024-07-15 23:53:16.101834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.102022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.102036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.102180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.102193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.102393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.102407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.102687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.102701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.102978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.102991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.103191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.103205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.103419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.103433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.103633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.103646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.103891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.103904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.104174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.104188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.104422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.104436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.104707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.104721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.104923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.104936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.105220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.105237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.105379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.105392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.105672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.105685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.105992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.106007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.106296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.106310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.106592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.106605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.106805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.106818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.107073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.107086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.107389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.107402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.107606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.107620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.107918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.107931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.108138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.108152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.108412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.108426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.108683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.108697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.108828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.108841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.109047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.109060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.109267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.109281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.340 [2024-07-15 23:53:16.109522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.340 [2024-07-15 23:53:16.109536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.340 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.109756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.109770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.109973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.109986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.110125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.110138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.110422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.110435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.110689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.110702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.110910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.110924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.111210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.111227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.111533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.111546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.111693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.111706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.111980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.111994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.112214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.112231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.112435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.112449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.112736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.112749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.113025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.113039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.113345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.113359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.113625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.113639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.113853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.113866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.114174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.114187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.114462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.114476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.114745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.114759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.114907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.341 [2024-07-15 23:53:16.114921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.341 qpair failed and we were unable to recover it.
00:27:27.341 [2024-07-15 23:53:16.115123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.115136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 00:27:27.341 [2024-07-15 23:53:16.115413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.115427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 00:27:27.341 [2024-07-15 23:53:16.115629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.115643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 00:27:27.341 [2024-07-15 23:53:16.115921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.115934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 00:27:27.341 [2024-07-15 23:53:16.116123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.116138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 
00:27:27.341 [2024-07-15 23:53:16.116422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.116435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 00:27:27.341 [2024-07-15 23:53:16.116724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.116737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 00:27:27.341 [2024-07-15 23:53:16.117019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.117033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 00:27:27.341 [2024-07-15 23:53:16.117290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.117304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 00:27:27.341 [2024-07-15 23:53:16.117531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.117545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 
00:27:27.341 [2024-07-15 23:53:16.117827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.117841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 00:27:27.341 [2024-07-15 23:53:16.118103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.341 [2024-07-15 23:53:16.118117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.341 qpair failed and we were unable to recover it. 00:27:27.341 [2024-07-15 23:53:16.118344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.118358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.118635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.118648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.118912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.118925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 
00:27:27.342 [2024-07-15 23:53:16.119140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.119154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.119381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.119395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.119586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.119599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.119820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.119833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.120057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.120071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 
00:27:27.342 [2024-07-15 23:53:16.120378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.120392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.120670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.120684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.120955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.120969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.121173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.121187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.121490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.121504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 
00:27:27.342 [2024-07-15 23:53:16.121760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.121775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.122037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.122051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.122241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.122255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.122526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.122540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.122748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.122761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 
00:27:27.342 [2024-07-15 23:53:16.123001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.123014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.123239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.123253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.123534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.123548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.123771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.123785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.124054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.124068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 
00:27:27.342 [2024-07-15 23:53:16.124279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.124293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.124549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.124564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.124851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.124866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.125149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.125163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.125383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.125397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 
00:27:27.342 [2024-07-15 23:53:16.125675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.125688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.125833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.125847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.342 qpair failed and we were unable to recover it. 00:27:27.342 [2024-07-15 23:53:16.126125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.342 [2024-07-15 23:53:16.126139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.126356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.126370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.126640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.126657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 
00:27:27.343 [2024-07-15 23:53:16.126925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.126940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.127141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.127154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.127466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.127480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.127679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.127693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.127908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.127922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 
00:27:27.343 [2024-07-15 23:53:16.128130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.128143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.128372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.128387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.128668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.128681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.128887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.128901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.129181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.129194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 
00:27:27.343 [2024-07-15 23:53:16.129402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.129416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.129605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.129618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.129868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.129882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.130165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.130179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.130376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.130390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 
00:27:27.343 [2024-07-15 23:53:16.130659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.130673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.130813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.130827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.131104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.131119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.131328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.131342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.131492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.131506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 
00:27:27.343 [2024-07-15 23:53:16.131835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.343 [2024-07-15 23:53:16.131849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.343 qpair failed and we were unable to recover it. 00:27:27.343 [2024-07-15 23:53:16.132114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.132128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.132383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.132398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.132652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.132667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.132903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.132919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 
00:27:27.344 [2024-07-15 23:53:16.133127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.133141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.133390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.133404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.133665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.133679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.133829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.133843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.134050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.134063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 
00:27:27.344 [2024-07-15 23:53:16.134284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.134298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.134559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.134573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.134826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.134840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.135051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.135065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 00:27:27.344 [2024-07-15 23:53:16.135220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.344 [2024-07-15 23:53:16.135238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.344 qpair failed and we were unable to recover it. 
00:27:27.345 [2024-07-15 23:53:16.142895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.345 [2024-07-15 23:53:16.142908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.345 qpair failed and we were unable to recover it.
00:27:27.345 [2024-07-15 23:53:16.143141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.345 [2024-07-15 23:53:16.143155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.345 qpair failed and we were unable to recover it.
00:27:27.345 [2024-07-15 23:53:16.143434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.345 [2024-07-15 23:53:16.143445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.345 qpair failed and we were unable to recover it.
00:27:27.345 [2024-07-15 23:53:16.143604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.345 [2024-07-15 23:53:16.143614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.345 qpair failed and we were unable to recover it.
00:27:27.345 [2024-07-15 23:53:16.143814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.345 [2024-07-15 23:53:16.143824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.345 qpair failed and we were unable to recover it.
00:27:27.348 [2024-07-15 23:53:16.161275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.161286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.161529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.161539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.161788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.161798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.162104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.162113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.162340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.162350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 
00:27:27.348 [2024-07-15 23:53:16.162618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.162628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.162901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.162911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.163181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.163191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.163419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.163430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.163579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.163589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 
00:27:27.348 [2024-07-15 23:53:16.163731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.163741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.164013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.164024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.164325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.164336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.164556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.164566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.164811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.164823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 
00:27:27.348 [2024-07-15 23:53:16.165092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.165102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.165298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.165308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.165492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.165503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.165688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.165698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.165893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.165903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 
00:27:27.348 [2024-07-15 23:53:16.166103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.166113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.166358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.166369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.166616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.166626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.166823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.166833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.167020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.167032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 
00:27:27.348 [2024-07-15 23:53:16.167298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.167309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.348 qpair failed and we were unable to recover it. 00:27:27.348 [2024-07-15 23:53:16.167541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.348 [2024-07-15 23:53:16.167551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.167739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.167750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.167966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.167976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.168187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.168198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 
00:27:27.349 [2024-07-15 23:53:16.168480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.168491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.168682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.168693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.168982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.168992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.169170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.169180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.169444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.169455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 
00:27:27.349 [2024-07-15 23:53:16.169653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.169663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.169848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.169858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.170105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.170115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.170317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.170327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.170636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.170646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 
00:27:27.349 [2024-07-15 23:53:16.170848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.170857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.171064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.171074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.171283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.171294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.171569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.171579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.171770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.171780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 
00:27:27.349 [2024-07-15 23:53:16.171984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.171994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.172199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.172209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.172436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.172447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.172720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.172731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.172947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.172957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 
00:27:27.349 [2024-07-15 23:53:16.173223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.173237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.173445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.173456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.173647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.349 [2024-07-15 23:53:16.173657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.349 qpair failed and we were unable to recover it. 00:27:27.349 [2024-07-15 23:53:16.173935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.173945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.174235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.174250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 
00:27:27.350 [2024-07-15 23:53:16.174529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.174539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.174754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.174764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.174948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.174958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.175210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.175220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.175509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.175520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 
00:27:27.350 [2024-07-15 23:53:16.175765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.175775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.175999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.176009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.176271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.176281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.176479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.176489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.176735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.176745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 
00:27:27.350 [2024-07-15 23:53:16.176991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.177001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.177246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.177257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.177525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.177536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.177674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.177684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.177935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.177945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 
00:27:27.350 [2024-07-15 23:53:16.178166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.178176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.178308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.178319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.178513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.178523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.178712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.178722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 00:27:27.350 [2024-07-15 23:53:16.178993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.350 [2024-07-15 23:53:16.179003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.350 qpair failed and we were unable to recover it. 
00:27:27.350 [2024-07-15 23:53:16.179149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.350 [2024-07-15 23:53:16.179159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.350 qpair failed and we were unable to recover it.
[... the three-line entry above repeats continuously from 23:53:16.179149 through 23:53:16.206873 (wall clock 00:27:27.350-00:27:27.354), alternating between tqpair=0x7fbe48000b90 and tqpair=0x21d1ed0; every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 and the qpair is reported unrecoverable ...]
00:27:27.354 [2024-07-15 23:53:16.207182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.207197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.207403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.207417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.207664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.207678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.207876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.207889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.208198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.208212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 
00:27:27.354 [2024-07-15 23:53:16.208442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.208462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.208665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.208679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.208887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.208901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.209176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.209189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.209342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.209357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 
00:27:27.354 [2024-07-15 23:53:16.209496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.209510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.209733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.209747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.210002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.210020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.210243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.210257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-15 23:53:16.210413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.210427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.354 qpair failed and we were unable to recover it. 
00:27:27.354 [2024-07-15 23:53:16.210590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.354 [2024-07-15 23:53:16.210603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.210800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.210814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.211012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.211027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.211302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.211316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.211525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.211539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 
00:27:27.355 [2024-07-15 23:53:16.211738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.211751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.212035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.212048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.212333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.212348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.212553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.212567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.212759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.212773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 
00:27:27.355 [2024-07-15 23:53:16.213029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.213043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.213304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.213321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.213590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.213605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.213868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.213882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.214081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.214094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 
00:27:27.355 [2024-07-15 23:53:16.214376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.214390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.214613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.214626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.214836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.214850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.215128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.215142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.215403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.215417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 
00:27:27.355 [2024-07-15 23:53:16.215621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.215635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.215841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.215855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.216139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.216152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.216350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.216365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.216571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.216585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 
00:27:27.355 [2024-07-15 23:53:16.216781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.216795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.217022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.217036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.217236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.217251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.217527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.217540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.217746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.217760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 
00:27:27.355 [2024-07-15 23:53:16.217980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.355 [2024-07-15 23:53:16.217993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-15 23:53:16.218303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.218319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.218577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.218590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.218833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.218847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.219145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.219158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 
00:27:27.356 [2024-07-15 23:53:16.219422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.219436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.219666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.219679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.219932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.219949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.220235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.220249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.220535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.220549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 
00:27:27.356 [2024-07-15 23:53:16.220771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.220784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.221018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.221032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.221349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.221363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.221640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.221654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.221807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.221820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 
00:27:27.356 [2024-07-15 23:53:16.222075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.222088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.222390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.222404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.222561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.222575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.222807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.222821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.223144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.223157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 
00:27:27.356 [2024-07-15 23:53:16.223401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.223415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.223673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.223687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.223977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.223991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.224292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.224306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.224578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.224592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 
00:27:27.356 [2024-07-15 23:53:16.224801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.224814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.225047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.225060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.225252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.225267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.225521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.225534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.225735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.225749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 
00:27:27.356 [2024-07-15 23:53:16.225961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.225975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-15 23:53:16.226249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.356 [2024-07-15 23:53:16.226264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.226489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.226503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.226775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.226789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.226960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.226974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 
00:27:27.357 [2024-07-15 23:53:16.227193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.227206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.227448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.227462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.227733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.227746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.227891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.227906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.228184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.228198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 
00:27:27.357 [2024-07-15 23:53:16.228443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.228456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.228664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.228678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.228931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.228944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.229176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.229189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.229383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.229397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 
00:27:27.357 [2024-07-15 23:53:16.229587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.229601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.229899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.229913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.230069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.230086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.230434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.230448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.230676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.230689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 
00:27:27.357 [2024-07-15 23:53:16.230959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.230973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.231173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.231187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.231395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.231409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.231685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.231699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.231929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.231943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 
00:27:27.357 [2024-07-15 23:53:16.232237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.232251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.232408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.232422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.232702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.232715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.232999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.233012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.233161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.233174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 
00:27:27.357 [2024-07-15 23:53:16.233372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.233386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.233603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.233617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.357 qpair failed and we were unable to recover it. 00:27:27.357 [2024-07-15 23:53:16.233779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.357 [2024-07-15 23:53:16.233792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.234059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.234073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.234259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.234273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 
00:27:27.358 [2024-07-15 23:53:16.234505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.234518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.234717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.234730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.235024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.235038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.235313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.235328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.235487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.235500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 
00:27:27.358 [2024-07-15 23:53:16.235758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.235772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.236080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.236093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.236319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.236332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.236587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.236600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.236800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.236814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 
00:27:27.358 [2024-07-15 23:53:16.237033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.237046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.237325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.237340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.237482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.237496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.237628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.237642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.237853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.237867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 
00:27:27.358 [2024-07-15 23:53:16.238073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.238086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.238303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.238317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.238456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.238470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.238708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.238722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.239018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.239031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 
00:27:27.358 [2024-07-15 23:53:16.239311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.239325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.239477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.239491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.239690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.239706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.239909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.239922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.240107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.240121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 
00:27:27.358 [2024-07-15 23:53:16.240404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.240417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.240698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.240712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.240846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.240860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.358 qpair failed and we were unable to recover it. 00:27:27.358 [2024-07-15 23:53:16.241047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.358 [2024-07-15 23:53:16.241060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.241294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.241308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 
00:27:27.359 [2024-07-15 23:53:16.241459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.241472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.241680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.241694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.241815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.241829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.241961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.241974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.242255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.242269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 
00:27:27.359 [2024-07-15 23:53:16.242478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.242491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.242646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.242660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.242955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.242969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.243229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.243244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.243523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.243536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 
00:27:27.359 [2024-07-15 23:53:16.243677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.243690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.243991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.244004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.244232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.244246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.244519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.244533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.244740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.244754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 
00:27:27.359 [2024-07-15 23:53:16.245041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.245054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.245257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.245272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.245501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.245514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.245717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.245730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.245955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.245970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 
00:27:27.359 [2024-07-15 23:53:16.246233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.246247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.246500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.246514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.246818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.246831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.247135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.247148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.247384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.247398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 
00:27:27.359 [2024-07-15 23:53:16.247598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.247611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.247765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.247778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.359 qpair failed and we were unable to recover it. 00:27:27.359 [2024-07-15 23:53:16.248082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.359 [2024-07-15 23:53:16.248096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.360 qpair failed and we were unable to recover it. 00:27:27.360 [2024-07-15 23:53:16.248302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.360 [2024-07-15 23:53:16.248316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.360 qpair failed and we were unable to recover it. 00:27:27.360 [2024-07-15 23:53:16.248520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.360 [2024-07-15 23:53:16.248533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.360 qpair failed and we were unable to recover it. 
00:27:27.360 [2024-07-15 23:53:16.248666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.360 [2024-07-15 23:53:16.248680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.360 qpair failed and we were unable to recover it. 00:27:27.360 [2024-07-15 23:53:16.248883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.360 [2024-07-15 23:53:16.248897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.360 qpair failed and we were unable to recover it. 00:27:27.360 [2024-07-15 23:53:16.249118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.360 [2024-07-15 23:53:16.249133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.360 qpair failed and we were unable to recover it. 00:27:27.360 [2024-07-15 23:53:16.249334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.360 [2024-07-15 23:53:16.249348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.360 qpair failed and we were unable to recover it. 00:27:27.360 [2024-07-15 23:53:16.249631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.360 [2024-07-15 23:53:16.249644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.360 qpair failed and we were unable to recover it. 
00:27:27.361 [2024-07-15 23:53:16.254588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.361 [2024-07-15 23:53:16.254601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.361 qpair failed and we were unable to recover it. 00:27:27.361 [2024-07-15 23:53:16.254833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.361 [2024-07-15 23:53:16.254847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.361 qpair failed and we were unable to recover it. 00:27:27.361 [2024-07-15 23:53:16.255061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.361 [2024-07-15 23:53:16.255074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.361 qpair failed and we were unable to recover it. 00:27:27.361 [2024-07-15 23:53:16.255343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.361 [2024-07-15 23:53:16.255357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.361 qpair failed and we were unable to recover it. 00:27:27.361 [2024-07-15 23:53:16.255605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.361 [2024-07-15 23:53:16.255625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.361 qpair failed and we were unable to recover it. 
00:27:27.362 [2024-07-15 23:53:16.264386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.362 [2024-07-15 23:53:16.264416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.362 qpair failed and we were unable to recover it. 00:27:27.362 [2024-07-15 23:53:16.264591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.362 [2024-07-15 23:53:16.264621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.362 qpair failed and we were unable to recover it. 00:27:27.362 [2024-07-15 23:53:16.264853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.362 [2024-07-15 23:53:16.264867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.362 qpair failed and we were unable to recover it. 00:27:27.362 [2024-07-15 23:53:16.265092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.362 [2024-07-15 23:53:16.265106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.362 qpair failed and we were unable to recover it. 00:27:27.362 [2024-07-15 23:53:16.265286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.362 [2024-07-15 23:53:16.265306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.362 qpair failed and we were unable to recover it. 
00:27:27.363 [2024-07-15 23:53:16.272835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.363 [2024-07-15 23:53:16.272865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:27.363 qpair failed and we were unable to recover it. 00:27:27.363 [2024-07-15 23:53:16.273081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.363 [2024-07-15 23:53:16.273115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.363 qpair failed and we were unable to recover it. 00:27:27.363 [2024-07-15 23:53:16.273388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.363 [2024-07-15 23:53:16.273402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.363 qpair failed and we were unable to recover it. 00:27:27.363 [2024-07-15 23:53:16.273560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.363 [2024-07-15 23:53:16.273573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.363 qpair failed and we were unable to recover it. 00:27:27.363 [2024-07-15 23:53:16.273855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.363 [2024-07-15 23:53:16.273869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.363 qpair failed and we were unable to recover it. 
00:27:27.639 [2024-07-15 23:53:16.277704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.277718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.277933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.277946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.278138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.278151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.278422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.278436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.278695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.278709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 
00:27:27.639 [2024-07-15 23:53:16.279004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.279018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.279285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.279299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.279514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.279529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.279799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.279813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.280065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.280095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 
00:27:27.639 [2024-07-15 23:53:16.280360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.280391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.280578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.280607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.280795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.280824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.281059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.281089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.281428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.281458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 
00:27:27.639 [2024-07-15 23:53:16.281649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.281678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.281868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.281897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.282238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.282269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.282462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.282492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.282762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.282799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 
00:27:27.639 [2024-07-15 23:53:16.283029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.283043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.283258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.283275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.283490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.283520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-07-15 23:53:16.283810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-07-15 23:53:16.283839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.284159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.284172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-07-15 23:53:16.284374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.284388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.284586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.284599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.284855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.284868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.285199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.285239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.285493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.285523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-07-15 23:53:16.285835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.285864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.286103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.286133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.286396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.286426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.286617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.286645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.286950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.286965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-07-15 23:53:16.287249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.287279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.287528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.287557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.287755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.287784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.288046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.288060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.288356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.288370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-07-15 23:53:16.288525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.288539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.288753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.288767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.288913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.288942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.289267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.289297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.289535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.289565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-07-15 23:53:16.289810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.289840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.290157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.290186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.290459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.290491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.290839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.290869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.291103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.291133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-07-15 23:53:16.291426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.291464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.291668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.291681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.291868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.291881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.292134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.292148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.292270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.292284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-07-15 23:53:16.292446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.292460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.292675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.292689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.292894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.292923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.293245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.293276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.293576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.293613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-07-15 23:53:16.293817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.293830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.294034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.294050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.294254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.294270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.294494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-07-15 23:53:16.294508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-07-15 23:53:16.294768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.294781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 
00:27:27.641 [2024-07-15 23:53:16.295135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.295164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 00:27:27.641 [2024-07-15 23:53:16.295413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.295445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 00:27:27.641 [2024-07-15 23:53:16.295663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.295676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 00:27:27.641 [2024-07-15 23:53:16.295875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.295905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 00:27:27.641 [2024-07-15 23:53:16.296206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.296246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 
00:27:27.641 [2024-07-15 23:53:16.296435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.296464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 00:27:27.641 [2024-07-15 23:53:16.296705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.296734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 00:27:27.641 [2024-07-15 23:53:16.297048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.297061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 00:27:27.641 [2024-07-15 23:53:16.297263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.297279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 00:27:27.641 [2024-07-15 23:53:16.297487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.641 [2024-07-15 23:53:16.297501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.641 qpair failed and we were unable to recover it. 
00:27:27.641 [2024-07-15 23:53:16.297710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.641 [2024-07-15 23:53:16.297724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.641 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats for tqpair=0x7fbe40000b90 (addr=10.0.0.2, port=4420), timestamps 2024-07-15 23:53:16.297949 through 23:53:16.329462 ...]
00:27:27.644 [2024-07-15 23:53:16.329685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.329699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.329911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.329925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.330072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.330085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.330344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.330358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.330591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.330620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 
00:27:27.644 [2024-07-15 23:53:16.330823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.330853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.331109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.331139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.331323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.331354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.331538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.331568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.331812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.331842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 
00:27:27.644 [2024-07-15 23:53:16.332152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.332182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.332379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.332409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.332701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.332733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.333038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.333068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.333298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.333329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 
00:27:27.644 [2024-07-15 23:53:16.333575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.333604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.333813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.333842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.334086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.334115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.334377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.334407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.334701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.334731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 
00:27:27.644 [2024-07-15 23:53:16.335058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.335087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.335334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.335365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.335551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.335580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.335805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.335834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.336059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.336088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 
00:27:27.644 [2024-07-15 23:53:16.336410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.336441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.336670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.336700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.337028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.337057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.337378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.644 [2024-07-15 23:53:16.337409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.644 qpair failed and we were unable to recover it. 00:27:27.644 [2024-07-15 23:53:16.337719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.337749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 
00:27:27.645 [2024-07-15 23:53:16.338009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.338022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.338261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.338277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.338536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.338550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.338764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.338778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.339100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.339114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 
00:27:27.645 [2024-07-15 23:53:16.339324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.339355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.339648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.339677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.339876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.339905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.340086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.340115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.340347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.340379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 
00:27:27.645 [2024-07-15 23:53:16.340570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.340599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.340893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.340923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.341247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.341278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.341525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.341554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.341820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.341849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 
00:27:27.645 [2024-07-15 23:53:16.342165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.342195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.342459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.342490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.342750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.342779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.343067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.343080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.343343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.343357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 
00:27:27.645 [2024-07-15 23:53:16.343573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.343587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.343738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.343752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.343892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.343906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.344210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.344249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.344568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.344597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 
00:27:27.645 [2024-07-15 23:53:16.344821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.344835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.345107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.345121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.345324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.345354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.345560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.345589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.345785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.345814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 
00:27:27.645 [2024-07-15 23:53:16.346019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.346049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.346236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.346267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.346506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.346535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.346721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.346750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.346992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.347023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 
00:27:27.645 [2024-07-15 23:53:16.347319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.347350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.347615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.347645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.347943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.347973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.348273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.348303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 00:27:27.645 [2024-07-15 23:53:16.348483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.645 [2024-07-15 23:53:16.348513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.645 qpair failed and we were unable to recover it. 
00:27:27.646 [2024-07-15 23:53:16.348704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.646 [2024-07-15 23:53:16.348733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.646 qpair failed and we were unable to recover it. 00:27:27.646 [2024-07-15 23:53:16.349004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.646 [2024-07-15 23:53:16.349038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.646 qpair failed and we were unable to recover it. 00:27:27.646 [2024-07-15 23:53:16.349352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.646 [2024-07-15 23:53:16.349383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.646 qpair failed and we were unable to recover it. 00:27:27.646 [2024-07-15 23:53:16.349648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.646 [2024-07-15 23:53:16.349677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.646 qpair failed and we were unable to recover it. 00:27:27.646 [2024-07-15 23:53:16.349876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.646 [2024-07-15 23:53:16.349906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.646 qpair failed and we were unable to recover it. 
00:27:27.646 [2024-07-15 23:53:16.350095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.646 [2024-07-15 23:53:16.350124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.646 qpair failed and we were unable to recover it. 00:27:27.646 [2024-07-15 23:53:16.350366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.646 [2024-07-15 23:53:16.350397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.646 qpair failed and we were unable to recover it. 00:27:27.646 [2024-07-15 23:53:16.350633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.646 [2024-07-15 23:53:16.350662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.646 qpair failed and we were unable to recover it. 00:27:27.646 [2024-07-15 23:53:16.351029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.646 [2024-07-15 23:53:16.351059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.646 qpair failed and we were unable to recover it. 00:27:27.646 [2024-07-15 23:53:16.351287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.646 [2024-07-15 23:53:16.351318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.646 qpair failed and we were unable to recover it. 
00:27:27.647 [2024-07-15 23:53:16.367027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.647 [2024-07-15 23:53:16.367057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.647 qpair failed and we were unable to recover it.
00:27:27.647 [2024-07-15 23:53:16.367400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.647 [2024-07-15 23:53:16.367432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.647 qpair failed and we were unable to recover it.
00:27:27.647 [2024-07-15 23:53:16.367683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.647 [2024-07-15 23:53:16.367712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:27.647 qpair failed and we were unable to recover it.
00:27:27.647 [2024-07-15 23:53:16.368010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.647 [2024-07-15 23:53:16.368080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420
00:27:27.647 qpair failed and we were unable to recover it.
00:27:27.647 [2024-07-15 23:53:16.368291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.647 [2024-07-15 23:53:16.368326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420
00:27:27.647 qpair failed and we were unable to recover it.
00:27:27.649 [2024-07-15 23:53:16.383204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.383243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.383442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.383478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.383780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.383810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.384130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.384160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.384471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.384502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 
00:27:27.649 [2024-07-15 23:53:16.384770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.384799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.385064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.385106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.385433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.385465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.385761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.385790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.386063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.386092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 
00:27:27.649 [2024-07-15 23:53:16.386367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.386397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.386647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.386676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.386893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.386906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.387200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.387251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.387449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.387478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 
00:27:27.649 [2024-07-15 23:53:16.387821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.387851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.388126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.388155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.388333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.388364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.388609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.388640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.388873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.388902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 
00:27:27.649 [2024-07-15 23:53:16.389213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.389250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.389586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.389617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.389866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.389895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.390223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.390261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.390441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.390469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 
00:27:27.649 [2024-07-15 23:53:16.390720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.390749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.391082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.391112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.391422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.391452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.391764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.391835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.392211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.392258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 
00:27:27.649 [2024-07-15 23:53:16.392565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.392595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.392774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.392804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.393050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.393080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.393382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.393413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.393600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.393631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 
00:27:27.649 [2024-07-15 23:53:16.393834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.649 [2024-07-15 23:53:16.393863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.649 qpair failed and we were unable to recover it. 00:27:27.649 [2024-07-15 23:53:16.394159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.394188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.394503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.394533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.394727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.394757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.395016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.395046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 
00:27:27.650 [2024-07-15 23:53:16.395361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.395392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.395589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.395628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.395926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.395955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.396257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.396287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.396558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.396587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 
00:27:27.650 [2024-07-15 23:53:16.396789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.396819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.397101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.397130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.397437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.397469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.397722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.397752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.398120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.398150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 
00:27:27.650 [2024-07-15 23:53:16.398403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.398433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.398735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.398765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.399106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.399136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.399456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.399486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.399735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.399764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 
00:27:27.650 [2024-07-15 23:53:16.400096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.400126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.400389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.400420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.400697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.400726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.401097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.401127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.401437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.401469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 
00:27:27.650 [2024-07-15 23:53:16.401733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.401762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.402105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.402134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.402457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.402487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.402786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.402816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.403143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.403172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 
00:27:27.650 [2024-07-15 23:53:16.403450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.403480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.403681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.403710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.403904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.403915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.404112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.404141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.404443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.404473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 
00:27:27.650 [2024-07-15 23:53:16.404798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.404828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.405113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.405143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.405446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.405476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.405673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.405703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 00:27:27.650 [2024-07-15 23:53:16.405958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.650 [2024-07-15 23:53:16.405987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.650 qpair failed and we were unable to recover it. 
00:27:27.650 [2024-07-15 23:53:16.406216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.650 [2024-07-15 23:53:16.406270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.650 qpair failed and we were unable to recover it.
[... same connect() failure sequence (errno = 111, ECONNREFUSED) repeated from 23:53:16.406543 through 23:53:16.439953 for tqpair=0x7fbe48000b90 and tqpair=0x7fbe50000b90, addr=10.0.0.2 port=4420; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:27:27.653 [2024-07-15 23:53:16.440158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.653 [2024-07-15 23:53:16.440189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.653 qpair failed and we were unable to recover it. 00:27:27.653 [2024-07-15 23:53:16.440450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.653 [2024-07-15 23:53:16.440481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.653 qpair failed and we were unable to recover it. 00:27:27.653 [2024-07-15 23:53:16.440682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.653 [2024-07-15 23:53:16.440712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.653 qpair failed and we were unable to recover it. 00:27:27.653 [2024-07-15 23:53:16.441077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.653 [2024-07-15 23:53:16.441088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.653 qpair failed and we were unable to recover it. 00:27:27.653 [2024-07-15 23:53:16.441219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.441233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 
00:27:27.654 [2024-07-15 23:53:16.441525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.441555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.441861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.441891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.442173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.442203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.442520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.442551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.442804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.442834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 
00:27:27.654 [2024-07-15 23:53:16.443078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.443107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.443383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.443414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.443635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.443665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.444025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.444056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.444452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.444483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 
00:27:27.654 [2024-07-15 23:53:16.444814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.444845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.445119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.445129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.445371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.445401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.445589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.445619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.445825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.445836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 
00:27:27.654 [2024-07-15 23:53:16.446131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.446161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.446420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.446450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.446694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.446723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.447126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.447156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.447415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.447446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 
00:27:27.654 [2024-07-15 23:53:16.447767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.447797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.448111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.448139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.448403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.448435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.448767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.448797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.449065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.449094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 
00:27:27.654 [2024-07-15 23:53:16.449437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.449468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.449715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.449745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.450036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.450066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.450416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.450447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.450671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.450701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 
00:27:27.654 [2024-07-15 23:53:16.451039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.451069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.451391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.451421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.451682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.451712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.451908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.451943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.452136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.452166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 
00:27:27.654 [2024-07-15 23:53:16.452440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.452471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.452779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.452810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.453152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.453182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.453391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.453422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 00:27:27.654 [2024-07-15 23:53:16.453632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.654 [2024-07-15 23:53:16.453662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.654 qpair failed and we were unable to recover it. 
00:27:27.654 [2024-07-15 23:53:16.453870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.453900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.454235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.454247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.454460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.454472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.454734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.454745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.454963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.454993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 
00:27:27.655 [2024-07-15 23:53:16.455330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.455361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.455546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.455576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.455928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.455958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.456210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.456248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.456508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.456539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 
00:27:27.655 [2024-07-15 23:53:16.456750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.456779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.457022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.457033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.457229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.457241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.457466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.457477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.457676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.457705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 
00:27:27.655 [2024-07-15 23:53:16.457946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.457977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.458258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.458289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.458503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.458533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.458835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.458865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.459117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.459146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 
00:27:27.655 [2024-07-15 23:53:16.459391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.459402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.459605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.459616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.459888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.459921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.460159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.460190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.460486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.460518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 
00:27:27.655 [2024-07-15 23:53:16.460833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.460863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.461160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.461191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.461427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.461458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.461646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.461676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.462038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.462069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 
00:27:27.655 [2024-07-15 23:53:16.462373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.462404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.462647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.462677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.463029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.463058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.463410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.463447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 00:27:27.655 [2024-07-15 23:53:16.463642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.655 [2024-07-15 23:53:16.463672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.655 qpair failed and we were unable to recover it. 
00:27:27.658 [2024-07-15 23:53:16.495079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.495110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 00:27:27.658 [2024-07-15 23:53:16.495430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.495441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 00:27:27.658 [2024-07-15 23:53:16.495662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.495672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 00:27:27.658 [2024-07-15 23:53:16.495814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.495825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 00:27:27.658 [2024-07-15 23:53:16.496111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.496142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 
00:27:27.658 [2024-07-15 23:53:16.496473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.496505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 00:27:27.658 [2024-07-15 23:53:16.496863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.496893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 00:27:27.658 [2024-07-15 23:53:16.497166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.497205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 00:27:27.658 [2024-07-15 23:53:16.497459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.497473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 00:27:27.658 [2024-07-15 23:53:16.497623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.497634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 
00:27:27.658 [2024-07-15 23:53:16.497857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.497887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 00:27:27.658 [2024-07-15 23:53:16.498119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.658 [2024-07-15 23:53:16.498130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.658 qpair failed and we were unable to recover it. 00:27:27.658 [2024-07-15 23:53:16.498389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.498421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.498610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.498643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.499049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.499079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 
00:27:27.659 [2024-07-15 23:53:16.499275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.499308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.499643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.499673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.499891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.499921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.500196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.500241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.500461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.500473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 
00:27:27.659 [2024-07-15 23:53:16.500696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.500726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.501081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.501112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.501300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.501311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.501591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.501602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.501758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.501769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 
00:27:27.659 [2024-07-15 23:53:16.502011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.502021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.502212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.502223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.502553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.502585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.502853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.502883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.503264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.503296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 
00:27:27.659 [2024-07-15 23:53:16.503626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.503656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.503941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.503971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.504286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.504320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.504579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.504609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.504864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.504895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 
00:27:27.659 [2024-07-15 23:53:16.505279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.505310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.505665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.505696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.505976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.506007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.506310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.506322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.506466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.506478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 
00:27:27.659 [2024-07-15 23:53:16.506698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.506729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.507020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.507050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.507234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.507245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.507432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.507463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.507824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.507855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 
00:27:27.659 [2024-07-15 23:53:16.508190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.508220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.508440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.508471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.508730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.508760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.509120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.509156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.509437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.509468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 
00:27:27.659 [2024-07-15 23:53:16.509655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.509685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.510042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.659 [2024-07-15 23:53:16.510072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.659 qpair failed and we were unable to recover it. 00:27:27.659 [2024-07-15 23:53:16.510406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.510438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.510686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.510716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.510898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.510929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 
00:27:27.660 [2024-07-15 23:53:16.511278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.511310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.511637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.511667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.511932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.511962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.512314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.512346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.512607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.512637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 
00:27:27.660 [2024-07-15 23:53:16.512880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.512911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.513096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.513106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.513365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.513396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.513651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.513680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.513963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.513993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 
00:27:27.660 [2024-07-15 23:53:16.514320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.514353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.514544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.514574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.514842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.514873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.515132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.515162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.515488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.515499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 
00:27:27.660 [2024-07-15 23:53:16.515683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.515694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.515910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.515940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.516294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.516325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.516523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.516534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.516747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.516778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 
00:27:27.660 [2024-07-15 23:53:16.517118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.517149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.517464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.517474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.517793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.517804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.518129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.518159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.518413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.518445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 
00:27:27.660 [2024-07-15 23:53:16.518716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.518747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.519015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.519045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.519363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.519395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.519719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.519749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.519986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.520016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 
00:27:27.660 [2024-07-15 23:53:16.520337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.520350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.520548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.660 [2024-07-15 23:53:16.520578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.660 qpair failed and we were unable to recover it. 00:27:27.660 [2024-07-15 23:53:16.520833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.520864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.521182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.521217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.521486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.521517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 
00:27:27.661 [2024-07-15 23:53:16.521790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.521821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.522049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.522059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.522375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.522407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.522661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.522691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.523070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.523101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 
00:27:27.661 [2024-07-15 23:53:16.523334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.523346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.523564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.523594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.523873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.523902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.524098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.524109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.524416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.524428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 
00:27:27.661 [2024-07-15 23:53:16.524649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.524659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.524962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.524992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.525254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.525285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.525594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.525624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.525960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.525989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 
00:27:27.661 [2024-07-15 23:53:16.526323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.526334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.526554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.526566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.526781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.526791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.526930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.526942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.527214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.527227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 
00:27:27.661 [2024-07-15 23:53:16.527442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.527453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.527666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.527677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.527897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.527907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.528117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.528127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.528427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.528440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 
00:27:27.661 [2024-07-15 23:53:16.528648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.528660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.528821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.528832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.529122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.529153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.529435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.529466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.529772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.529802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 
00:27:27.661 [2024-07-15 23:53:16.529976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.530007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.530317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.530350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.530571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.530601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.530860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.530890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.531254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.531285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 
00:27:27.661 [2024-07-15 23:53:16.531569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.531600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.531861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.531891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.532144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.661 [2024-07-15 23:53:16.532174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.661 qpair failed and we were unable to recover it. 00:27:27.661 [2024-07-15 23:53:16.532468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.532483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.532781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.532811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 
00:27:27.662 [2024-07-15 23:53:16.533147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.533177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.533500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.533532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.533845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.533875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.534190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.534220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.534563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.534595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 
00:27:27.662 [2024-07-15 23:53:16.534843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.534873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.535131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.535161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.535409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.535420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.535636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.535646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.535784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.535794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 
00:27:27.662 [2024-07-15 23:53:16.536108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.536138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.536456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.536488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.536741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.536772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.537022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.537052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.537376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.537406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 
00:27:27.662 [2024-07-15 23:53:16.537640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.537671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.537996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.538027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.538289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.538320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.538575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.538605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.538851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.538881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 
00:27:27.662 [2024-07-15 23:53:16.539250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.539282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.539539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.539569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.539775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.539805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.540139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.540169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.540418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.540431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 
00:27:27.662 [2024-07-15 23:53:16.540741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.540773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.541039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.541069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.541388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.541419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.541614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.541644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.541971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.542001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 
00:27:27.662 [2024-07-15 23:53:16.542337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.542368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.542700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.542711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.542882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.542893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.543110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.543139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.543396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.543407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 
00:27:27.662 [2024-07-15 23:53:16.543570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.543581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.543748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.543778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.544093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.544123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.544435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.662 [2024-07-15 23:53:16.544472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.662 qpair failed and we were unable to recover it. 00:27:27.662 [2024-07-15 23:53:16.544680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.544709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 
00:27:27.663 [2024-07-15 23:53:16.544923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.544953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 00:27:27.663 [2024-07-15 23:53:16.545214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.545254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 00:27:27.663 [2024-07-15 23:53:16.545553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.545563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 00:27:27.663 [2024-07-15 23:53:16.545866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.545896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 00:27:27.663 [2024-07-15 23:53:16.546157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.546188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 
00:27:27.663 [2024-07-15 23:53:16.546445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.546477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 00:27:27.663 [2024-07-15 23:53:16.546739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.546769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 00:27:27.663 [2024-07-15 23:53:16.547134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.547164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 00:27:27.663 [2024-07-15 23:53:16.547438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.547470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 00:27:27.663 [2024-07-15 23:53:16.547733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.663 [2024-07-15 23:53:16.547763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.663 qpair failed and we were unable to recover it. 
00:27:27.663 [2024-07-15 23:53:16.548128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.548158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.548471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.548503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.548821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.548852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.549181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.549212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.549560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.549590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.549797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.549827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.550067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.550100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.550409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.550441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.550699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.550710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.550999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.551010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.551269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.551300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.551506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.551535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.551794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.551825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.552156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.552186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.552533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.552564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.552822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.552854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.553184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.553215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.553407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.553438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.553719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.553749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.554097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.554129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.554322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.554353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.554612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.554643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.556045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.556075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.556382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.556398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.556687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.556711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.556998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.557030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.557341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.557372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.557566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.557597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.557800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.557838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.663 [2024-07-15 23:53:16.558144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.663 [2024-07-15 23:53:16.558188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.663 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.558452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.558464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.558609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.558620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.558831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.558842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.559063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.559094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.559349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.559384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.559592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.559623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.559891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.559921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.560238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.560283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.560501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.560513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.560733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.560743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.561076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.561087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.561421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.561453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.561739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.561770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.562104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.562135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.562325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.562337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.562481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.562491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.562655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.562667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.562975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.563005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.563373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.563404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.563651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.563662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.563817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.563828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.564028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.564040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.564300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.564312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.564549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.564561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.564717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.564730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.565032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.565064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.565325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.565356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.565600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.565612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.565877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.565890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.566162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.566176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.566455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.566479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.566803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.566836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.567206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.567272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.567446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.567466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.664 [2024-07-15 23:53:16.567699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.664 [2024-07-15 23:53:16.567714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.664 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.567919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.567951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.568288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.568325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.568640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.568653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.568879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.568892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.569132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.569144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.569441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.569475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.569678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.569709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.570072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.570103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.570331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.570364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.570704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.570736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.571051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.571082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.571286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.571319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.571556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.571587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.571846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.571878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.572212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.572263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.572489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.572520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.572901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.572933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.573267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.573300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.573484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.573495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.573659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.573691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.573950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.573981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.574250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.574283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.574598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.574629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.574890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.574921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.575180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.575192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.575470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.575482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.575782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.575813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.576047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.576078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.576411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.576424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.576646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.665 [2024-07-15 23:53:16.576658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:27.665 qpair failed and we were unable to recover it.
00:27:27.665 [2024-07-15 23:53:16.576812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.576825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 00:27:27.665 [2024-07-15 23:53:16.577056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.577069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 00:27:27.665 [2024-07-15 23:53:16.577261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.577286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 00:27:27.665 [2024-07-15 23:53:16.577502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.577513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 00:27:27.665 [2024-07-15 23:53:16.577705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.577716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 
00:27:27.665 [2024-07-15 23:53:16.578001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.578032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 00:27:27.665 [2024-07-15 23:53:16.578272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.578306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 00:27:27.665 [2024-07-15 23:53:16.578556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.578587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 00:27:27.665 [2024-07-15 23:53:16.578779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.578811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 00:27:27.665 [2024-07-15 23:53:16.579001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.579033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 
00:27:27.665 [2024-07-15 23:53:16.579216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.665 [2024-07-15 23:53:16.579293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.665 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.579556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.579588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.579846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.579877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.580069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.580100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.580288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.580323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 
00:27:27.666 [2024-07-15 23:53:16.580651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.580679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.580865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.580896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.581131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.581162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.581348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.581361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.581600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.581612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 
00:27:27.666 [2024-07-15 23:53:16.581764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.581775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.581939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.581950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.582159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.582182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.582323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.582335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.582540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.582572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 
00:27:27.666 [2024-07-15 23:53:16.582744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.582776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.583039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.583070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.583371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.583405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.583577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.583608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.583807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.583838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 
00:27:27.666 [2024-07-15 23:53:16.584028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.584059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.584394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.584435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.584564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.584576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.584788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.584799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.585021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.585053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 
00:27:27.666 [2024-07-15 23:53:16.585248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.585282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.585472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.585503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.585755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.585778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.586035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.586046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.586196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.586209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 
00:27:27.666 [2024-07-15 23:53:16.586364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.586378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.586538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.586549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.586840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.586869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.587047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.587078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.587263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.587276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 
00:27:27.666 [2024-07-15 23:53:16.587570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.587601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.587791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.587822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.588064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.588095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.588356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.588389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.588579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.588610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 
00:27:27.666 [2024-07-15 23:53:16.588847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.666 [2024-07-15 23:53:16.588878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.666 qpair failed and we were unable to recover it. 00:27:27.666 [2024-07-15 23:53:16.589061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.589092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.589397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.589409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.589625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.589636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.589775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.589786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 
00:27:27.667 [2024-07-15 23:53:16.590031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.590062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.590310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.590322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.590594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.590606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.590813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.590824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.590958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.590969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 
00:27:27.667 [2024-07-15 23:53:16.591111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.591123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.591365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.591377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.591528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.591539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.591679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.591691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.591826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.591837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 
00:27:27.667 [2024-07-15 23:53:16.592004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.592035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.592277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.592309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.592572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.592604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.592856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.592886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.593055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.593100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 
00:27:27.667 [2024-07-15 23:53:16.593242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.593255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.593387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.593399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.593552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.593563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.593690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.593701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.593841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.593853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 
00:27:27.667 [2024-07-15 23:53:16.594062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.594093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.594332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.594364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.594690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.594721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.594918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.594949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.667 [2024-07-15 23:53:16.595162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.595174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 
00:27:27.667 [2024-07-15 23:53:16.595321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.667 [2024-07-15 23:53:16.595335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.667 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.595602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.595614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.595760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.595772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.595975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.595986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.596176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.596188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 
00:27:27.946 [2024-07-15 23:53:16.596327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.596338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.596644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.596655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.596803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.596815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.597081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.597092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.597289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.597300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 
00:27:27.946 [2024-07-15 23:53:16.597438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.597449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.597610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.597620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.597778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.597789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.598038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.598049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.598251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.598262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 
00:27:27.946 [2024-07-15 23:53:16.598454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.598464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.598679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.598691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.598896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.598908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.599006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-07-15 23:53:16.599017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-07-15 23:53:16.599301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.599313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-07-15 23:53:16.599450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.599484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.599621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.599651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.599972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.600002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.600241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.600273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.600538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.600550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-07-15 23:53:16.600765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.600777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.600913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.600925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.601131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.601161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.601411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.601443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.601629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.601660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-07-15 23:53:16.601831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.601862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.602106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.602136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.602415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.602447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.602771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.602802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.602987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.603017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-07-15 23:53:16.603251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.603282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.603423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.603434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.603612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.603643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.603902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.603931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.604110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.604140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-07-15 23:53:16.604490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.604533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.604807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.604838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.605170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.605201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.605477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.605509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.605633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.605663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-07-15 23:53:16.605917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.605947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.606141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-07-15 23:53:16.606171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-07-15 23:53:16.606423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.606454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.606656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.606686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.606936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.606967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 
00:27:27.948 [2024-07-15 23:53:16.607133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.607163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.607347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.607359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.607552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.607582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.607751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.607781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.608023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.608053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 
00:27:27.948 [2024-07-15 23:53:16.608364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.608396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.608587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.608618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.608791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.608822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.609075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.609105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.609268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.609280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 
00:27:27.948 [2024-07-15 23:53:16.609476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.609506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.609691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.609721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.609903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.609933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.610132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.610162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.610402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.610413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 
00:27:27.948 [2024-07-15 23:53:16.610571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.610583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.610717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.610727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.610846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.610884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.611070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.611100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.611304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.611335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 
00:27:27.948 [2024-07-15 23:53:16.611504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.611516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.611637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.611648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.611764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.611774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.611971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.612001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-07-15 23:53:16.612172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-07-15 23:53:16.612202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-07-15 23:53:16.612477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.612509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.612675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.612705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.612832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.612862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.613054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.613084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.613260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.613291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-07-15 23:53:16.613467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.613503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.613665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.613696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.613875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.613905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.614141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.614171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.614479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.614512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-07-15 23:53:16.614750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.614781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.614951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.614980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.615143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.615173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.615339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.615350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.615563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.615574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-07-15 23:53:16.615793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.615804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.616025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.616035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.616204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.616214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.616455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.616486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.616650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.616681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-07-15 23:53:16.617063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.617093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.617328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.617360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.617539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.617550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.617757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.617787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.618024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.618055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-07-15 23:53:16.618315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.618346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-07-15 23:53:16.618603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-07-15 23:53:16.618634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.618979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.619009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.619197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.619238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.619515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.619546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-07-15 23:53:16.619742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.619773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.620032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.620062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.620423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.620455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.620661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.620691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.620874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.620904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-07-15 23:53:16.621094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.621124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.621419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.621447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.621706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.621737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.622073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.622103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.622352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.622363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-07-15 23:53:16.622625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.622656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.622887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.622917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.623169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.623199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.623392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.623402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.623600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.623630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-07-15 23:53:16.623969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.624005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.624312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.624343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.624582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.624612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.624926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.624955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.625287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.625319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-07-15 23:53:16.625506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.625536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.625834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.625864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.626187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.626217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.626436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-07-15 23:53:16.626467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-07-15 23:53:16.626723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.626753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 
00:27:27.951 [2024-07-15 23:53:16.627026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.627056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.627373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.627403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.627671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.627682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.628002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.628032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.628350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.628382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 
00:27:27.951 [2024-07-15 23:53:16.628674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.628685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.628815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.628826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.629135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.629146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.629393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.629404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.629549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.629560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 
00:27:27.951 [2024-07-15 23:53:16.629759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.629770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.629906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.629917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.630194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.630205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.630416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.630427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.630615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.630625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 
00:27:27.951 [2024-07-15 23:53:16.630831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.630842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.631047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.631077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.631346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.631377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.631629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.631660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.632000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.632030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 
00:27:27.951 [2024-07-15 23:53:16.632350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.632383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.632636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.632668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.632830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.632842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.633124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.633153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.633418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.633450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 
00:27:27.951 [2024-07-15 23:53:16.633810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.633840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-07-15 23:53:16.634165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-07-15 23:53:16.634195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.634453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.634491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.634708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.634723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.634945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.634959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-07-15 23:53:16.635169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.635187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.635385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.635399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.635569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.635598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.635872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.635902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.636221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.636262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-07-15 23:53:16.636469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.636499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.636762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.636792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.637092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.637122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.637456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.637487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.637723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.637753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-07-15 23:53:16.638102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.638132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.638455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.638485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.638690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.638704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.638974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.638988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.639307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.639338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-07-15 23:53:16.639592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-07-15 23:53:16.639622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-07-15 23:53:16.639876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.639891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.640087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.640100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.640297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.640311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.640595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.640625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 
00:27:27.953 [2024-07-15 23:53:16.640941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.640971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.641285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.641315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.641514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.641543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.641817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.641847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.642098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.642127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 
00:27:27.953 [2024-07-15 23:53:16.642382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.642413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.642695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.642709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.643002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.643017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.643283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.643298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.643590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.643605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 
00:27:27.953 [2024-07-15 23:53:16.643874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.643888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.644125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.644155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.644331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.644361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.644594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.644624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.644957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.644996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 
00:27:27.953 [2024-07-15 23:53:16.645301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.645331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.645603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.645632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.645875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.645905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.646207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.646245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-07-15 23:53:16.646561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-07-15 23:53:16.646591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 
00:27:27.957 [2024-07-15 23:53:16.681733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-07-15 23:53:16.681763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-07-15 23:53:16.682092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-07-15 23:53:16.682123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-07-15 23:53:16.682454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-07-15 23:53:16.682486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-07-15 23:53:16.682822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-07-15 23:53:16.682852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-07-15 23:53:16.683160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.683190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-07-15 23:53:16.683512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.683543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.683867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.683897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.684235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.684266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.684573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.684604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.684864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.684894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-07-15 23:53:16.685254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.685291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.685552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.685583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.685839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.685869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.686049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.686080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.686333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.686364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-07-15 23:53:16.686715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.686729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.687021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.687051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.687409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.687440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.687677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.687691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.688012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.688043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-07-15 23:53:16.688292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.688324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.688650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.688680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.689016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.689046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.689295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.689327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.689660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.689691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-07-15 23:53:16.689997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.690027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.690349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.690381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.690692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.690722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-07-15 23:53:16.691040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-07-15 23:53:16.691070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.691327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.691359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 
00:27:27.959 [2024-07-15 23:53:16.691615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.691630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.691828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.691842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.692143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.692173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.692450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.692481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.692836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.692866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 
00:27:27.959 [2024-07-15 23:53:16.693192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.693222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.693525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.693555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.693847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.693879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.694216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.694257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.694590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.694620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 
00:27:27.959 [2024-07-15 23:53:16.694951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.694981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.695287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.695318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.695575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.695605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.695908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.695938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.696255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.696286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 
00:27:27.959 [2024-07-15 23:53:16.696598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.696629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.696948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.696978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.697315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.697346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.697673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.697703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.698029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.698059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 
00:27:27.959 [2024-07-15 23:53:16.698387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.698428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-07-15 23:53:16.698669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-07-15 23:53:16.698687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.698913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.698928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.699154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.699168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.699317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.699332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-07-15 23:53:16.699613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.699643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.699928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.699959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.700268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.700300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.700626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.700657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.700977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.700992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-07-15 23:53:16.701314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.701328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.701533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.701547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.701807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.701837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.702071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.702101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.702448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.702480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-07-15 23:53:16.702732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.702763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.703071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.703101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.703447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.703479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.703797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.703827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.704177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.704207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-07-15 23:53:16.704557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.704588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.704827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.704857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.705189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.705220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.705481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.705512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.705777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.705807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-07-15 23:53:16.706160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.706190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.706566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.706597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.706933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.706964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.707149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.707178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-07-15 23:53:16.707512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-07-15 23:53:16.707545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-07-15 23:53:16.743170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.743202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.743465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.743497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.743800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.743817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.743961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.743977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.744268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.744285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-07-15 23:53:16.744614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.744629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.744857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.744886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.745260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.745299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.745549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.745580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.745844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.745876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-07-15 23:53:16.746209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.746249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.746582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.746614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.746925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.746958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.747292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.747325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.747648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.747681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-07-15 23:53:16.747923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.747955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.748215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.748272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.748593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.748625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.748882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.748915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-07-15 23:53:16.749236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.749271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-07-15 23:53:16.749600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-07-15 23:53:16.749617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.749821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.749839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.750138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.750169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.750439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.750473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.750815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.750847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-07-15 23:53:16.751156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.751189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.751485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.751518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.751855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.751887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.752219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.752263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.752522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.752554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-07-15 23:53:16.752835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.752867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.753201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.753246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.753573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.753606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.753934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.753966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.754304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.754339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-07-15 23:53:16.754665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.754697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.755039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.755056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.755353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.755370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.755615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.755646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.755888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.755920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-07-15 23:53:16.756188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.756220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.756582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.756614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.756825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.756857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.757201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.757218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.757571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.757603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-07-15 23:53:16.757912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.757944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.758214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.758238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-07-15 23:53:16.758469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-07-15 23:53:16.758511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.758848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.758879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.759151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.759184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 
00:27:27.967 [2024-07-15 23:53:16.759442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.759475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.759748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.759780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.760034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.760051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.760333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.760367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.760606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.760637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 
00:27:27.967 [2024-07-15 23:53:16.760960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.761008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.761325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.761358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.761607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.761639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.761950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.761981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.762240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.762274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 
00:27:27.967 [2024-07-15 23:53:16.762604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.762637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.762892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.762925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.763160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.763192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.763473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.763507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.763840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.763873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 
00:27:27.967 [2024-07-15 23:53:16.764202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.764253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.764511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.764543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.764893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.764926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.765296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.765329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.765663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.765695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 
00:27:27.967 [2024-07-15 23:53:16.765984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.766016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.766355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.766389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.766718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.766749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.767101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.767133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-07-15 23:53:16.767473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.767514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 
00:27:27.967 [2024-07-15 23:53:16.767814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-07-15 23:53:16.767846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-07-15 23:53:16.768151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-07-15 23:53:16.768183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-07-15 23:53:16.768501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-07-15 23:53:16.768536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-07-15 23:53:16.768792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-07-15 23:53:16.768824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-07-15 23:53:16.769128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-07-15 23:53:16.769145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 
00:27:27.968 - 00:27:27.972 [2024-07-15 23:53:16.769477 through 23:53:16.804074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 (identical two-line failure repeated for every connection attempt in this interval; each attempt ended with "qpair failed and we were unable to recover it.")
00:27:27.972 [2024-07-15 23:53:16.804408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.804441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.804702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.804735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.804993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.805010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.805311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.805345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.805618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.805651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 
00:27:27.972 [2024-07-15 23:53:16.805937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.805970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.806307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.806340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.806601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.806633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.806949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.806981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.807161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.807178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 
00:27:27.972 [2024-07-15 23:53:16.807411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.807445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.807748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.807781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.808038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.808054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.808348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.808366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.808580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.808615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 
00:27:27.972 [2024-07-15 23:53:16.808933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.808965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.809292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.809326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.809502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.809539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.809892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.809925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.810251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.810285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 
00:27:27.972 [2024-07-15 23:53:16.810532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.810564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.810828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.810859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-07-15 23:53:16.811167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-07-15 23:53:16.811199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.811465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.811499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.811854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.811885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-07-15 23:53:16.812145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.812177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.812496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.812529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.812789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.812821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.813171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.813203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.813566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.813600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-07-15 23:53:16.813866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.813898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.814248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.814282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.814533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.814566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.814827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.814859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.815119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.815152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-07-15 23:53:16.815430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.815464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.815709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.815741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.816071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.816104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.816308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.816342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.816649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.816682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-07-15 23:53:16.817004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.817037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.817361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.817394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.817729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.817762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.818088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.818104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.818265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.818282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-07-15 23:53:16.818584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.818617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.818937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.818969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.819293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.819327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.819662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.819695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.820024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.820057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-07-15 23:53:16.820316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.820349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.820709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.820741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-07-15 23:53:16.821011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-07-15 23:53:16.821042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.821368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.821401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.821710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.821743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 
00:27:27.974 [2024-07-15 23:53:16.822054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.822071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.822346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.822364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.822660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.822683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.822976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.823008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.823259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.823292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 
00:27:27.974 [2024-07-15 23:53:16.823536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.823567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.823839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.823871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.824177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.824209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.824456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.824487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.824739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.824770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 
00:27:27.974 [2024-07-15 23:53:16.825071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.825088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.825388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.825405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.825564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.825597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.825907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.825923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.826159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.826176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 
00:27:27.974 [2024-07-15 23:53:16.826454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.826472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.826750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.826767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.827066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.827110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.827446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.827480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.827757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.827789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 
00:27:27.974 [2024-07-15 23:53:16.828104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.828135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.828372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.828405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.828686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.828719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.829049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.829082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 00:27:27.974 [2024-07-15 23:53:16.829422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.974 [2024-07-15 23:53:16.829456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.974 qpair failed and we were unable to recover it. 
00:27:27.978 [2024-07-15 23:53:16.865101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.865133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.865399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.865433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.865768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.865801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.866032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.866065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.866348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.866382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 
00:27:27.978 [2024-07-15 23:53:16.866721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.866753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.867084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.867116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.867442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.867476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.867707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.867738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.867998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.868030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 
00:27:27.978 [2024-07-15 23:53:16.868272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.868289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.868576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.868609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.868881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.868913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.869173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.869205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.869540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.869572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 
00:27:27.978 [2024-07-15 23:53:16.869906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.869938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.870204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.870246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.870592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.870624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.870900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.870933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.871211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.871253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 
00:27:27.978 [2024-07-15 23:53:16.871592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.871625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.871957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.871989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.872328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.872346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.872644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.872661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.872873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.872890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 
00:27:27.978 [2024-07-15 23:53:16.873234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.873268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.873604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.873636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.873886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.873919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.874241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.874274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.874606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.874638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 
00:27:27.978 [2024-07-15 23:53:16.874946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.874978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.875297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.875330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.875638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.875671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.875994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.876010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.876330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.876364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 
00:27:27.978 [2024-07-15 23:53:16.876683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.876716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.876888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.876921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.877248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.877282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.877617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.877649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.877892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.877925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 
00:27:27.978 [2024-07-15 23:53:16.878189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.878206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.878510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.878543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.878820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-07-15 23:53:16.878852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-07-15 23:53:16.879107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.879140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.879390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.879423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-07-15 23:53:16.879787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.879819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.880189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.880221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.880603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.880636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.880966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.880997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.881331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.881348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-07-15 23:53:16.881654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.881687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.882021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.882053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.882304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.882321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.882611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.882643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.882952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.882990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-07-15 23:53:16.883306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.883340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.883602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.883634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.883998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.884039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.884287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.884304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.884635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.884667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-07-15 23:53:16.884908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.884939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.885273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.885306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.885514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.885546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.885878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.885910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.886233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.886266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-07-15 23:53:16.886599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.886631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.886969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.887003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.887332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.887366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.887623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.887656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.888011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.888044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-07-15 23:53:16.888213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.888257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.888566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.888598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.888859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.888891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.889259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.889292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.889603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.889635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-07-15 23:53:16.889933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.889966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.890223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.890264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.890543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.890575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.890819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.890851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-07-15 23:53:16.891183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-07-15 23:53:16.891215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:28.261 [2024-07-15 23:53:16.925072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.925105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.925421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.925438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.925734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.925752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.925967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.926000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.926329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.926363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 
00:27:28.261 [2024-07-15 23:53:16.926632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.926649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.926808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.926826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.926956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.926974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.927108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.927125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.927342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.927360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 
00:27:28.261 [2024-07-15 23:53:16.927552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.927585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.927898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.927930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.928242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.928260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.928448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.928466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.928614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.928632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 
00:27:28.261 [2024-07-15 23:53:16.928849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.928881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.929120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.929152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.929509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.929543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.929875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.929907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.930178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.930209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 
00:27:28.261 [2024-07-15 23:53:16.930466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.930499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.930765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.930797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.931060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.931092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.931328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.261 [2024-07-15 23:53:16.931346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.261 qpair failed and we were unable to recover it. 00:27:28.261 [2024-07-15 23:53:16.931546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.931566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 
00:27:28.262 [2024-07-15 23:53:16.931794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.931826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.932156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.932189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.932494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.932511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.932811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.932843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.933179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.933210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 
00:27:28.262 [2024-07-15 23:53:16.933466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.933483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.933703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.933720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.934017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.934061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.934361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.934394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.934639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.934670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 
00:27:28.262 [2024-07-15 23:53:16.934860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.934892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.935222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.935266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.935491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.935508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.935815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.935848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.936169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.936202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 
00:27:28.262 [2024-07-15 23:53:16.936520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.936538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.936844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.936876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.937119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.937152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.937483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.937519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.937853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.937886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 
00:27:28.262 [2024-07-15 23:53:16.938216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.938259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.938591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.938623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.938865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.938897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.939239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.939273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.939552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.939585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 
00:27:28.262 [2024-07-15 23:53:16.939926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.939959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.940246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.940279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.940614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.940646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.940899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.940931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.941248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.941283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 
00:27:28.262 [2024-07-15 23:53:16.941615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.941646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.941902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.941933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.942163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.942195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.942405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.942439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.942754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.942771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 
00:27:28.262 [2024-07-15 23:53:16.943065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.943083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.943294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.943337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1164098 Killed "${NVMF_APP[@]}" "$@" 00:27:28.262 [2024-07-15 23:53:16.943671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.943705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.943957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.262 [2024-07-15 23:53:16.943989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.262 qpair failed and we were unable to recover it. 00:27:28.262 [2024-07-15 23:53:16.944236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-07-15 23:53:16.944254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 
00:27:28.263 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:28.263 [2024-07-15 23:53:16.944395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-07-15 23:53:16.944413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-07-15 23:53:16.944661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-07-15 23:53:16.944678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:28.263 [2024-07-15 23:53:16.944975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-07-15 23:53:16.944993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:28.263 [2024-07-15 23:53:16.945271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-07-15 23:53:16.945289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 
00:27:28.263 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:28.263 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.263 [2024-07-15 23:53:16.945590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-07-15 23:53:16.945608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-07-15 23:53:16.945834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-07-15 23:53:16.945850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-07-15 23:53:16.946125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-07-15 23:53:16.946143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-07-15 23:53:16.946416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-07-15 23:53:16.946434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 00:27:28.263 [2024-07-15 23:53:16.946688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.263 [2024-07-15 23:53:16.946706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.263 qpair failed and we were unable to recover it. 
00:27:28.263 [2024-07-15 23:53:16.946985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.947003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.947257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.947308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.947611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.947630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.947929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.947947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.948155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.948173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.948401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.948418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.948656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.948672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.948868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.948885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.949106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.949123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.949344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.949361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.949696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.949713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.950007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.950025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.950240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.950258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.950549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.950565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.950718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.950734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.951034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.951051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.951311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.951327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.951528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.951544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.951689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.263 [2024-07-15 23:53:16.951705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.263 qpair failed and we were unable to recover it.
00:27:28.263 [2024-07-15 23:53:16.951907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.951924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.952128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.952145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.952374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.952390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.952602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.952618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1164982
00:27:28.264 [2024-07-15 23:53:16.952913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.952931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1164982
00:27:28.264 [2024-07-15 23:53:16.953232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.953250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:28.264 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1164982 ']' [2024-07-15 23:53:16.953486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.953503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock [2024-07-15 23:53:16.953803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.953820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 [2024-07-15 23:53:16.954094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.954111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:28.264 [2024-07-15 23:53:16.954427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.954445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable [2024-07-15 23:53:16.954668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.954687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 23:53:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-07-15 23:53:16.954858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.954876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.955118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.955135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.955355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.955372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.955646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.955664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.955899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.955915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.956070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.956087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.956361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.956378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.956670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.956689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.956960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.956977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.957201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.957219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.957521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.957536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.957756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.957771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.957986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.958003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.958231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.958248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.958466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.958483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.958716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.958732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.958932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.958948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.959234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.959251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.959541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.959557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.959793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.959812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.960024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.960042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.960340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.960358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.960562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.960579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.960850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.960867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.961100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.961115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.264 [2024-07-15 23:53:16.961382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.264 [2024-07-15 23:53:16.961400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.264 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.961613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.961631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.961904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.961920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.962121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.962138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.962359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.962375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.962643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.962660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.962873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.962890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.963037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.963053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.963196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.963213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.963451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.963470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.963669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.963684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.963907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.963924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.964215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.964241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.964509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.964527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.964822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.964839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.965098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.965114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.965348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.965366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.965662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.965678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.965974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.965990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.966200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.966217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.966499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.966515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.966718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.966734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.966882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.966898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.967191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.967207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.967359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.967376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.967679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.967695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.967837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.967854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.968122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.265 [2024-07-15 23:53:16.968138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.265 qpair failed and we were unable to recover it.
00:27:28.265 [2024-07-15 23:53:16.968410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.968427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.968716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.968732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.968955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.968971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.969182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.969199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.969541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.969559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 
00:27:28.265 [2024-07-15 23:53:16.969797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.969814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.970148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.970165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.970375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.970394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.970687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.970704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.970927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.970944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 
00:27:28.265 [2024-07-15 23:53:16.971238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.971255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.971568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.971587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.971889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.971906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.265 [2024-07-15 23:53:16.972129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.265 [2024-07-15 23:53:16.972145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.265 qpair failed and we were unable to recover it. 00:27:28.266 [2024-07-15 23:53:16.972459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.266 [2024-07-15 23:53:16.972476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.266 qpair failed and we were unable to recover it. 
00:27:28.266 [2024-07-15 23:53:16.972772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.266 [2024-07-15 23:53:16.972788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.266 qpair failed and we were unable to recover it. 00:27:28.266 [2024-07-15 23:53:16.972950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.266 [2024-07-15 23:53:16.972967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.266 qpair failed and we were unable to recover it. 00:27:28.266 [2024-07-15 23:53:16.973173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.266 [2024-07-15 23:53:16.973190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.266 qpair failed and we were unable to recover it. 00:27:28.266 [2024-07-15 23:53:16.973330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.266 [2024-07-15 23:53:16.973347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.266 qpair failed and we were unable to recover it. 00:27:28.266 [2024-07-15 23:53:16.973551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.266 [2024-07-15 23:53:16.973568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.266 qpair failed and we were unable to recover it. 
00:27:28.266 [2024-07-15 23:53:16.973775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.973792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.974110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.974126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.974375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.974420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.974713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.974735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.975042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.975058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.975342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.975360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.975512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.975529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.975795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.975811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.976028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.976044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.976245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.976262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.976539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.976555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.976705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.976721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.976951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.976967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.977214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.977237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.977501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.977518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.977727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.977748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.977996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.978012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.978222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.978246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.978565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.978582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.978888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.978904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.979221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.979243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.979482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.979499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.979701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.979718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.980002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.980020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.980223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.980246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.980445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.980463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.980672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.980688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.980927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.980944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.981259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.981276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.266 qpair failed and we were unable to recover it.
00:27:28.266 [2024-07-15 23:53:16.981527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.266 [2024-07-15 23:53:16.981545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.981839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.981856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.982053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.982070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.982264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.982281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.982502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.982519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.982803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.982820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.983089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.983105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.983309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.983326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.983555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.983572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.983790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.983806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.984116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.984133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.984354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.984379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.984616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.984633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.984851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.984871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.985087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.985104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.985346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.985363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.985580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.985597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.985829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.985845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.986144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.986160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.986447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.986463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.986705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.986722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.986915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.986931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.987163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.987179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.987412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.987428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.987718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.987735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.987873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.987889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.988099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.988114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.988257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.988274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.988426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.988441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.988716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.988733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.988994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.989010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.989302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.989319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.989524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.989540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.989800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.989817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.990033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.990050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.990264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.990282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.990523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.990539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.990691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.990707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.990943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.990959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.991177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.991193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.991334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.991354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.267 qpair failed and we were unable to recover it.
00:27:28.267 [2024-07-15 23:53:16.991639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.267 [2024-07-15 23:53:16.991656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.991930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.991950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.992245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.992262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.992417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.992433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.992651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.992667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.992945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.992962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.993177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.993193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.993338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.993354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.993511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.993527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.993822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.993841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.994073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.994090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.994302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.994319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.994579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.994595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.994751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.994767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.994987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.995003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.995147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.995162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.995375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.995392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.995597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.995613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.995894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.995910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.996058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.996074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.996264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.996281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.996535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.996551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.996734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.996750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.996962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.996978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.997197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.268 [2024-07-15 23:53:16.997214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.268 qpair failed and we were unable to recover it.
00:27:28.268 [2024-07-15 23:53:16.997445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-07-15 23:53:16.997461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-07-15 23:53:16.997724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-07-15 23:53:16.997750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-07-15 23:53:16.997879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-07-15 23:53:16.997895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-07-15 23:53:16.998094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-07-15 23:53:16.998110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-07-15 23:53:16.998334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-07-15 23:53:16.998351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 
00:27:28.268 [2024-07-15 23:53:16.998508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-07-15 23:53:16.998525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-07-15 23:53:16.998809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-07-15 23:53:16.998825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-07-15 23:53:16.998954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-07-15 23:53:16.998970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-07-15 23:53:16.999113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-07-15 23:53:16.999128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 00:27:28.268 [2024-07-15 23:53:16.999340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.268 [2024-07-15 23:53:16.999356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.268 qpair failed and we were unable to recover it. 
00:27:28.268 [2024-07-15 23:53:16.999636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-07-15 23:53:16.999652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-07-15 23:53:16.999789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-07-15 23:53:16.999805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-07-15 23:53:17.000000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-07-15 23:53:17.000015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-07-15 23:53:17.000210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-07-15 23:53:17.000230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 00:27:28.269 [2024-07-15 23:53:17.000426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-07-15 23:53:17.000442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 
00:27:28.269 [2024-07-15 23:53:17.001456] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:27:28.269 [2024-07-15 23:53:17.001502] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:27:28.269 [2024-07-15 23:53:17.008378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.269 [2024-07-15 23:53:17.008410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.269 qpair failed and we were unable to recover it. 
00:27:28.271 [2024-07-15 23:53:17.019918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.019930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.020110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.020123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.020331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.020344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.020538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.020551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.020828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.020840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 
00:27:28.271 [2024-07-15 23:53:17.020971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.020983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.021179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.021192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.021361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.021374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.021595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.021607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.021832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.021845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 
00:27:28.271 [2024-07-15 23:53:17.021959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.021971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.022228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.022241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.022393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.022405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.022525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.022538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.022764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.022776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 
00:27:28.271 [2024-07-15 23:53:17.022915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.022928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.023197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.023209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.023307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.023320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.023597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.023609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.023816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.023829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 
00:27:28.271 [2024-07-15 23:53:17.023953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.023964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.024065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.024077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.024278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.024291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.024546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.024558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.271 qpair failed and we were unable to recover it. 00:27:28.271 [2024-07-15 23:53:17.024711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.271 [2024-07-15 23:53:17.024723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 
00:27:28.272 [2024-07-15 23:53:17.024876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.024887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.025161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.025174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.025437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.025450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.025634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.025646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.025833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.025846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 
00:27:28.272 [2024-07-15 23:53:17.026070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.026086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.026336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.026348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.026545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.026557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.026679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.026691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.026852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.026864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 
00:27:28.272 [2024-07-15 23:53:17.026994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.027007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.027242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.027254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.027511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.027523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.027729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.027742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.027884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.027896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 
00:27:28.272 [2024-07-15 23:53:17.028027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.028038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.028171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.028185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.028330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.028343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.028531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.028544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.028679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.028693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 
00:27:28.272 [2024-07-15 23:53:17.028915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.028928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.029064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.029077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.029323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.029337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.029463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.029476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.029568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.029580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 
00:27:28.272 [2024-07-15 23:53:17.029716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.029728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.029860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.029872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.030071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.030083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.030357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.030369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.030623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.030634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 
00:27:28.272 [2024-07-15 23:53:17.030747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.030759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.030896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.030909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.031101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.031113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.272 [2024-07-15 23:53:17.031247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.272 [2024-07-15 23:53:17.031260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.272 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.031534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.031545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-07-15 23:53:17.031675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.031687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.031854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.031866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.032050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.032063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.032404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.032436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.032770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.032782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-07-15 23:53:17.033006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.033017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.033331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.033344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.033617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.033629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.033777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.033789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.033992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.034004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-07-15 23:53:17.034257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.034271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.034469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.034481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.034668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.034680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.034873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.034885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.035000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.035014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-07-15 23:53:17.035168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.035181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.035318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.035331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.035525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.035537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.035678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.035690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.035910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.035922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-07-15 23:53:17.036114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.036127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.036259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.036272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.036576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.036589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.036795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.036808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 00:27:28.273 [2024-07-15 23:53:17.037011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.273 [2024-07-15 23:53:17.037023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.273 qpair failed and we were unable to recover it. 
00:27:28.273 [2024-07-15 23:53:17.037299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.037312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.037515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.037527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.037686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.037698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.037924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.037937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.038052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.038064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.038194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.038206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.038433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.038444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.038645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.038657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.038730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.038742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.038948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.038960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.039178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.039189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.039462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.039475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.039664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.039676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.039882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.039894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.273 qpair failed and we were unable to recover it.
00:27:28.273 [2024-07-15 23:53:17.040097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.273 [2024-07-15 23:53:17.040109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.040302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.040314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.040506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.040518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.040653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.040665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.040937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.040950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.041235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.041247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.041369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.041381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.041631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.041643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.041833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.041845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.042052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.042064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.042263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.042276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.042551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.042565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.042778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.042790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.043037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.043049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.043304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.043316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.043508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.043520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.043709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.043721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.043838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.043849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.044049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.044061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.044188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.044200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.044276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.044289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.044486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.044498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.044714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.044726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.044988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.045000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.045258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.045270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.045524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.045535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.045672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.045684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.045883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.045895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.046094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.046106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.046300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.046312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.046499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.046512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.046706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.046718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.046840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.046852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.047048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.047060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.047191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.047203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.047338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.047351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.047503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.047515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.047696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.047709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.047891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.047904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.048083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.048095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.274 qpair failed and we were unable to recover it.
00:27:28.274 [2024-07-15 23:53:17.048237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.274 [2024-07-15 23:53:17.048249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.048526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.048538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.048671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.048683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.048930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.048942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.049079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.049092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.049371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.049384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.049577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.049589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.049772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.049784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.049978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.049990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.050108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.050119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.050313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.050325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.050575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.050589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.050767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.050779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.050962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.050974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.051260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.051273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.051480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.051492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.051678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.051690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.051820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.051832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.051974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.051987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.052267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.052282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.052417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.052429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.052617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.052628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.052756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.052768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.053049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.053060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.053200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.053213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.053361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.053373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.053492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.053504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.053711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.053723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.053939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.053951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.054134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.054146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.054344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.054356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.054501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.054513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.054759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.054772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.054952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.054964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.055172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.055184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.055384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.055396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.055659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.055670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.055856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.055868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.056127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.056138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.056335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.056347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.056546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.275 [2024-07-15 23:53:17.056559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.275 qpair failed and we were unable to recover it.
00:27:28.275 [2024-07-15 23:53:17.056758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.056770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.056961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.056973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.057106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.057118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.057310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.057323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.057456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.057469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 
00:27:28.276 [2024-07-15 23:53:17.057615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.057627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.057754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.057766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.057953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.057965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.058160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.058171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.058353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.058366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 
00:27:28.276 [2024-07-15 23:53:17.058550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.058562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.058765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.058777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.058921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.058933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.059048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.059060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.059195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.059207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 
00:27:28.276 [2024-07-15 23:53:17.059327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.059340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.059524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.059535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.059735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.059747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.059968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.059979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.060111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.060123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 
00:27:28.276 [2024-07-15 23:53:17.060267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.060279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.060488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.060501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.060699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.060711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.060858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.060869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.061004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.061016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 
00:27:28.276 [2024-07-15 23:53:17.061212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.061226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.061308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.061322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.061592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.061603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.061889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.061901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.062030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.062043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 
00:27:28.276 [2024-07-15 23:53:17.062159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.062170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.062300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.062312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.062568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.062580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.062761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.062773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.276 qpair failed and we were unable to recover it. 00:27:28.276 [2024-07-15 23:53:17.062956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.276 [2024-07-15 23:53:17.062968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 
00:27:28.277 [2024-07-15 23:53:17.063101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.063112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.063360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.063372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.063499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.063513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.063706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.063718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.063896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.063909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 
00:27:28.277 [2024-07-15 23:53:17.064118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.064130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.064271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.064284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.064422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.064434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.064632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.064644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.064784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.064796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 
00:27:28.277 [2024-07-15 23:53:17.065049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.065061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.065248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.065261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.065479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.065490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.065624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.065637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.065910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.065922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 
00:27:28.277 [2024-07-15 23:53:17.066174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.066186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.066329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.066342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.066544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.066557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.066746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.066758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.066888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.066900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 
00:27:28.277 [2024-07-15 23:53:17.067011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.067024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.067207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.067219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.067425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.067436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.067632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.067644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.067826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.067838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 
00:27:28.277 [2024-07-15 23:53:17.068045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.068057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.068249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.068262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.068446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.068458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.068653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.068664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.068870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.068883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 
00:27:28.277 [2024-07-15 23:53:17.069069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.069081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.069284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.069296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.069528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.069540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.069746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.069758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.069944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.069956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 
00:27:28.277 [2024-07-15 23:53:17.070140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.070152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.070349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.070361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.070493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.070505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.070628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.070639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 00:27:28.277 [2024-07-15 23:53:17.070885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.277 [2024-07-15 23:53:17.070899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.277 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-07-15 23:53:17.071080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.071092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.071228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.071241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.071428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.071443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.071566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.071579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.071800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.071813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-07-15 23:53:17.071960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.071973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.072096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.072109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.072247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.072260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.072395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.072408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.072536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.072548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-07-15 23:53:17.072845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.072859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.072973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.072986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.073181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.073194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.073392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.073404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 00:27:28.278 [2024-07-15 23:53:17.073531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.278 [2024-07-15 23:53:17.073543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.278 qpair failed and we were unable to recover it. 
00:27:28.278 [2024-07-15 23:53:17.075263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:28.281 [2024-07-15 23:53:17.096337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.096350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.096597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.096610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.096749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.096761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.096942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.096954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.097136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.097148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 
00:27:28.281 [2024-07-15 23:53:17.097369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.097382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.097527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.097540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.097810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.097822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.098035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.098048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.098243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.098255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 
00:27:28.281 [2024-07-15 23:53:17.098389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.098404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.098584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.098596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.098843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.098855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.098998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.099011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.099173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.099185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 
00:27:28.281 [2024-07-15 23:53:17.099379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.099392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.099520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.099533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.099677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.099690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.099901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.099913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.100043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.100057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 
00:27:28.281 [2024-07-15 23:53:17.100242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.100255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.100475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.100487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.100678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.100690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.100832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.100844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.281 qpair failed and we were unable to recover it. 00:27:28.281 [2024-07-15 23:53:17.101094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.281 [2024-07-15 23:53:17.101106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 
00:27:28.282 [2024-07-15 23:53:17.101243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.101256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.101439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.101451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.101636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.101648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.101836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.101849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.101973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.101985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 
00:27:28.282 [2024-07-15 23:53:17.102132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.102144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.102282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.102295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.102512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.102525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.102708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.102720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.102903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.102916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 
00:27:28.282 [2024-07-15 23:53:17.103050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.103062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.103208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.103220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.103481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.103494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.103741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.103753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.104025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.104038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 
00:27:28.282 [2024-07-15 23:53:17.104222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.104238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.104420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.104432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.104625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.104637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.104897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.104909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.105113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.105125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 
00:27:28.282 [2024-07-15 23:53:17.105314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.105328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.105509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.105521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.105707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.105719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.105989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.106001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.106248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.106260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 
00:27:28.282 [2024-07-15 23:53:17.106483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.106500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.106700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.106713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.106913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.106926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.107191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.107204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.107358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.107371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 
00:27:28.282 [2024-07-15 23:53:17.107506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.107519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.107778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.107791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.108001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.108013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.108209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.108221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.108341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.108354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 
00:27:28.282 [2024-07-15 23:53:17.108499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.108512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.108758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.108770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.108930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.108942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.109142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.109154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 00:27:28.282 [2024-07-15 23:53:17.109337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.282 [2024-07-15 23:53:17.109350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.282 qpair failed and we were unable to recover it. 
00:27:28.282 [2024-07-15 23:53:17.109545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-07-15 23:53:17.109557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-07-15 23:53:17.109683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-07-15 23:53:17.109696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-07-15 23:53:17.109897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-07-15 23:53:17.109909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-07-15 23:53:17.110094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-07-15 23:53:17.110107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-07-15 23:53:17.110244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-07-15 23:53:17.110256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 
00:27:28.283 [2024-07-15 23:53:17.110503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-07-15 23:53:17.110516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-07-15 23:53:17.110760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-07-15 23:53:17.110773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-07-15 23:53:17.110978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-07-15 23:53:17.110991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-07-15 23:53:17.111268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-07-15 23:53:17.111288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 00:27:28.283 [2024-07-15 23:53:17.111489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.283 [2024-07-15 23:53:17.111505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.283 qpair failed and we were unable to recover it. 
00:27:28.283 [2024-07-15 23:53:17.111639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.283 [2024-07-15 23:53:17.111654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.283 qpair failed and we were unable to recover it.
00:27:28.283 [... identical connect()/qpair error triplet for tqpair=0x7fbe48000b90 repeated through 2024-07-15 23:53:17.120117 ...]
00:27:28.284 [2024-07-15 23:53:17.120430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.284 [2024-07-15 23:53:17.120468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.284 qpair failed and we were unable to recover it.
00:27:28.286 [... identical connect()/qpair error triplet for tqpair=0x21d1ed0 repeated through 2024-07-15 23:53:17.135662 ...]
00:27:28.286 [2024-07-15 23:53:17.135862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.135878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.136198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.136214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.136412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.136427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.136639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.136653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.136840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.136855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 
00:27:28.286 [2024-07-15 23:53:17.136980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.136995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.137202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.137218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.137360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.137376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.137496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.137514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.137808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.137823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 
00:27:28.286 [2024-07-15 23:53:17.137960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.137975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.138198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.138213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.138369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.138385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.138572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.138587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.138810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.138824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 
00:27:28.286 [2024-07-15 23:53:17.139075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.139090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.139292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.139307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.139430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.139445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.139582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.139597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.139849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.139864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 
00:27:28.286 [2024-07-15 23:53:17.139947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.139961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.140151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.140166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.140352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.140391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.140604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.140618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.140874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.140886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 
00:27:28.286 [2024-07-15 23:53:17.141083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.141095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.141292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.286 [2024-07-15 23:53:17.141304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.286 qpair failed and we were unable to recover it. 00:27:28.286 [2024-07-15 23:53:17.141497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.141509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.141603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.141614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.141734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.141746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 
00:27:28.287 [2024-07-15 23:53:17.141945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.141957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.142149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.142161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.142348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.142361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.142560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.142571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.142690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.142702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 
00:27:28.287 [2024-07-15 23:53:17.142843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.142857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.143059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.143072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.143283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.143294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.143417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.143429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.143623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.143635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 
00:27:28.287 [2024-07-15 23:53:17.143775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.143786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.143918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.143930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.144203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.144215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.144359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.144371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.144485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.144497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 
00:27:28.287 [2024-07-15 23:53:17.144705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.144717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.144915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.144926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.145041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.145053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.145304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.145317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.145451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.145463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 
00:27:28.287 [2024-07-15 23:53:17.145721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.145734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.145937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.145948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.146147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.146160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.146286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.146298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.146437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.146449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 
00:27:28.287 [2024-07-15 23:53:17.146516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.146534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.146809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.146822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.146962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.146974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.147119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.147132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.147259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.147272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 
00:27:28.287 [2024-07-15 23:53:17.147456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.147468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.147654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.147668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.287 qpair failed and we were unable to recover it. 00:27:28.287 [2024-07-15 23:53:17.147879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.287 [2024-07-15 23:53:17.147892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-07-15 23:53:17.148010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-07-15 23:53:17.148022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-07-15 23:53:17.148207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-07-15 23:53:17.148219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 
00:27:28.288 [2024-07-15 23:53:17.148345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-07-15 23:53:17.148358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-07-15 23:53:17.148503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-07-15 23:53:17.148517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-07-15 23:53:17.148765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-07-15 23:53:17.148781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-07-15 23:53:17.148950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-07-15 23:53:17.148963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-07-15 23:53:17.149036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-07-15 23:53:17.149048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 
00:27:28.288 [2024-07-15 23:53:17.149195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-07-15 23:53:17.149208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-07-15 23:53:17.149466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-07-15 23:53:17.149479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-07-15 23:53:17.149680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.288 [2024-07-15 23:53:17.149693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.288 qpair failed and we were unable to recover it. 00:27:28.288 [2024-07-15 23:53:17.149800] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.288 [2024-07-15 23:53:17.149827] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.288 [2024-07-15 23:53:17.149834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.288 [2024-07-15 23:53:17.149841] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.288 [2024-07-15 23:53:17.149847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:28.288 [2024-07-15 23:53:17.149906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.149920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.149955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:28.288 [2024-07-15 23:53:17.150060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:28.288 [2024-07-15 23:53:17.150165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.150164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:28.288 [2024-07-15 23:53:17.150177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.150165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:28.288 [2024-07-15 23:53:17.150379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.150391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.150640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.150652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.150920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.150931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.151181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.151193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.151423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.151436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.151634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.151646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.151908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.151922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.152148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.152162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.152411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.152424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.152681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.152692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.152838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.152850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.153032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.153046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.153317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.153330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.153477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.153489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.153702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.153716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.153918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.153931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.154131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.154143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.154278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.154292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.154438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.154450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.154650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.154662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.154791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.154803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.154984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.154997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.155297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.155309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.288 [2024-07-15 23:53:17.155402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.288 [2024-07-15 23:53:17.155414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.288 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.155547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.155560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.155771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.155784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.155980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.155992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.156245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.156259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.156501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.156514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.156697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.156710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.156864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.156879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.157126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.157138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.157355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.157367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.157566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.157578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.157698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.157710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.157907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.157920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.158136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.158149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.158352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.158368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.158617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.158630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.158876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.158890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.159036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.159048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.159239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.159253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.159468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.159481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.159681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.159693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.159935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.159948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.160167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.160179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.160357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.160371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.160645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.160658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.160856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.160868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.161154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.161167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.161371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.161384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.161664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.161680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.161984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.161997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.162262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.162277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.162479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.162491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.162702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.162716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.162838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.162850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.162984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.289 [2024-07-15 23:53:17.162996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.289 qpair failed and we were unable to recover it.
00:27:28.289 [2024-07-15 23:53:17.163130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.163142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.163444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.163457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.163580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.163591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.163732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.163744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.163965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.163978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.164241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.164254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.164389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.164401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.164649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.164663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.164895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.164908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.165094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.165107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.165291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.165305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.165447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.165459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.165603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.165616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.165862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.165875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.166089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.166102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.166306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.166319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.166444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.166457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.166654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.166668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.166819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.166833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.167044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.167062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.167240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.167255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.167464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.167478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.167692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.167704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.167859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.167873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.168107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.168120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.168367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.168380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.168593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.168605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.168921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.168934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.169217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.169234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.169364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.169377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.169624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.169636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.290 qpair failed and we were unable to recover it.
00:27:28.290 [2024-07-15 23:53:17.169774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.290 [2024-07-15 23:53:17.169786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.170096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.170110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.170419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.170432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.170633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.170646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.170855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.170868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.171113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.171126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.171422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.171436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.171688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.171700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.171815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.171827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.172024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.172036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.172286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.172300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.172499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.172512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.172713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.172726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.172963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.172975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.173243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.173257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.173456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.173469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.173619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.173631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.173926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.173940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.174123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.174135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.174344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.174357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.174605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.174618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.174768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.174780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.174970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.291 [2024-07-15 23:53:17.174983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.291 qpair failed and we were unable to recover it.
00:27:28.291 [2024-07-15 23:53:17.175172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.175185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.175376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.175388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.175525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.175537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.175728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.175740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.176046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.176058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 
00:27:28.291 [2024-07-15 23:53:17.176327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.176342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.176458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.176470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.176614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.176627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.176809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.176821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.177092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.177105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 
00:27:28.291 [2024-07-15 23:53:17.177286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.177298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.177435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.177447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.177635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.291 [2024-07-15 23:53:17.177648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.291 qpair failed and we were unable to recover it. 00:27:28.291 [2024-07-15 23:53:17.177838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.177850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.178095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.178107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 
00:27:28.292 [2024-07-15 23:53:17.178317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.178329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.178599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.178612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.178831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.178845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.179114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.179127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.179385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.179398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 
00:27:28.292 [2024-07-15 23:53:17.179583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.179595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.179795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.179808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.179934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.179946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.180195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.180208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.180415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.180427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 
00:27:28.292 [2024-07-15 23:53:17.180687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.180699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.180915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.180927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.181118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.181131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.181375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.181387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.181656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.181669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 
00:27:28.292 [2024-07-15 23:53:17.181864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.181878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.182165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.182177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.182453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.182466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.182735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.182747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.182970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.182982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 
00:27:28.292 [2024-07-15 23:53:17.183258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.183272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.183472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.183485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.183684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.183697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.183944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.183956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.184174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.184187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 
00:27:28.292 [2024-07-15 23:53:17.184471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.184484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.184749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.184762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.184967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.184980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.185274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.185286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.185484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.185497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 
00:27:28.292 [2024-07-15 23:53:17.185726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.185740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.185987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.185999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.186300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.186312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.186450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.186462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.186735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.186747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 
00:27:28.292 [2024-07-15 23:53:17.187011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.187023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.187242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.187254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.292 [2024-07-15 23:53:17.187508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.292 [2024-07-15 23:53:17.187520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.292 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.187699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.187711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.187972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.187984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-07-15 23:53:17.188198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.188209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.188362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.188375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.188556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.188568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.188714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.188725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.188908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.188920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-07-15 23:53:17.189115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.189127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.189390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.189402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.189530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.189542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.189811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.189823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.190089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.190102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-07-15 23:53:17.190335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.190347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.190547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.190559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.190806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.190818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.191003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.191015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.191261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.191273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-07-15 23:53:17.191492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.191503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.191655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.191667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.191886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.191898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.192171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.192183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.192337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.192360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.293 [2024-07-15 23:53:17.192509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.192521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.192796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.192809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.192990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.193002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.193276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.193289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 00:27:28.293 [2024-07-15 23:53:17.193537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.293 [2024-07-15 23:53:17.193549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.293 qpair failed and we were unable to recover it. 
00:27:28.575 [2024-07-15 23:53:17.220204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.575 [2024-07-15 23:53:17.220216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.220420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.220433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.220727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.220739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.220932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.220944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.221189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.221200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 
00:27:28.576 [2024-07-15 23:53:17.221472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.221485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.221735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.221747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.222037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.222049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.222238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.222249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.222409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.222420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 
00:27:28.576 [2024-07-15 23:53:17.222605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.222617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.222889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.222901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.223170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.223181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.223375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.223388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.223672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.223684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 
00:27:28.576 [2024-07-15 23:53:17.223936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.223949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.224199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.224211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.224432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.224444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.224726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.224738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.224988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.225000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 
00:27:28.576 [2024-07-15 23:53:17.225194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.225206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.225391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.225403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.225670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.225681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.225896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.225908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.226155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.226167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 
00:27:28.576 [2024-07-15 23:53:17.226457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.226469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.226667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.226679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.226860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.226871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.227071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.227082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.227299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.227311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 
00:27:28.576 [2024-07-15 23:53:17.227562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.227573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.227768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.227779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.227985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.227997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.228181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.228192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 00:27:28.576 [2024-07-15 23:53:17.228481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.576 [2024-07-15 23:53:17.228493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.576 qpair failed and we were unable to recover it. 
00:27:28.577 [2024-07-15 23:53:17.228688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.228700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.228995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.229007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.229155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.229167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.229473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.229486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.229744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.229756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 
00:27:28.577 [2024-07-15 23:53:17.229983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.229995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.230241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.230253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.230536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.230548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.230820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.230832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.231028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.231040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 
00:27:28.577 [2024-07-15 23:53:17.231247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.231259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.231526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.231538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.231786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.231798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.232014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.232026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.232322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.232334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 
00:27:28.577 [2024-07-15 23:53:17.232585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.232597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.232871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.232883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.233127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.233139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.233335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.233346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.233617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.233629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 
00:27:28.577 [2024-07-15 23:53:17.233892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.233906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.234150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.234162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.234433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.234445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.234696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.234708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.234970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.234982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 
00:27:28.577 [2024-07-15 23:53:17.235241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.235253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.235508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.235519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.235767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.235778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.236039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.236051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.236289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.236301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 
00:27:28.577 [2024-07-15 23:53:17.236570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.236582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.236837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.236848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.237069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.237080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.237300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.237311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.237580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.237591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 
00:27:28.577 [2024-07-15 23:53:17.237732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.237745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.237990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.238001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.238275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.577 [2024-07-15 23:53:17.238287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.577 qpair failed and we were unable to recover it. 00:27:28.577 [2024-07-15 23:53:17.238522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.238533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.238787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.238799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 
00:27:28.578 [2024-07-15 23:53:17.238942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.238954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.239204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.239216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.239462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.239500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.239822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.239849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.240057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.240073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 
00:27:28.578 [2024-07-15 23:53:17.240340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.240356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.240609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.240624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.240942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.240961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.241243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.241258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.241476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.241491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 
00:27:28.578 [2024-07-15 23:53:17.241714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.241729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.242006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.242021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.242289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.242305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.242580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.242595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.242819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.242834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 
00:27:28.578 [2024-07-15 23:53:17.243125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.243140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.243439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.243455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.243727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.243742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.243941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.243956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.244086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.244102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 
00:27:28.578 [2024-07-15 23:53:17.244320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.244336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.244622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.244637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.244898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.244912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.245119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.245135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.245390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.245406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 
00:27:28.578 [2024-07-15 23:53:17.245534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.245549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.245830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.245846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.246037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.246051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.246328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.246345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.246581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.246596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 
00:27:28.578 [2024-07-15 23:53:17.246826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.246841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.247031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.247046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.247301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.247317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.247588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.247603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.247883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.247901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 
00:27:28.578 [2024-07-15 23:53:17.248147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.248162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.248418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.248435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.248578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.248593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.248799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.578 [2024-07-15 23:53:17.248814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.578 qpair failed and we were unable to recover it. 00:27:28.578 [2024-07-15 23:53:17.249018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.249033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 
00:27:28.579 [2024-07-15 23:53:17.249279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.249296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.249428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.249443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.249658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.249673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.249884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.249899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.250173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.250188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 
00:27:28.579 [2024-07-15 23:53:17.250464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.250480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.250731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.250747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.250887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.250902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.251038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.251053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.251312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.251328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 
00:27:28.579 [2024-07-15 23:53:17.251533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.251548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.251764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.251780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.252009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.252023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.252296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.252313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.252566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.252581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 
00:27:28.579 [2024-07-15 23:53:17.252773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.252789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.253111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.253126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.253331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.253348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.253543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.253558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.253809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.253823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 
00:27:28.579 [2024-07-15 23:53:17.254137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.254153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.254431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.254450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.254738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.254753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.254984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.254999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.255280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.255296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 
00:27:28.579 [2024-07-15 23:53:17.255557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.255572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.255874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.255890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.256167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.256182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.256398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.256415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.579 [2024-07-15 23:53:17.256684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.256699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 
00:27:28.579 [2024-07-15 23:53:17.256912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.579 [2024-07-15 23:53:17.256927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.579 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.257208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.257223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.257360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.257376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.257563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.257578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.257765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.257780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 
00:27:28.580 [2024-07-15 23:53:17.258073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.258088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.258370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.258385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.258665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.258681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.258904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.258920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.259173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.259188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 
00:27:28.580 [2024-07-15 23:53:17.259442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.259458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.259714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.259729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.259917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.259932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.260212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.260232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.260443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.260458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 
00:27:28.580 [2024-07-15 23:53:17.260608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.260624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.260894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.260909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.261046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.261061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.261341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.261362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.261643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.261659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 
00:27:28.580 [2024-07-15 23:53:17.261857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.261872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.262062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.262077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.262355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.262370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.262507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.262523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 00:27:28.580 [2024-07-15 23:53:17.262720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.580 [2024-07-15 23:53:17.262735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.580 qpair failed and we were unable to recover it. 
00:27:28.580 [2024-07-15 23:53:17.262968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.262983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.263102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.263117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.263398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.263413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.263631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.263646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.263896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.263912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.264165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.264181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.264403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.264419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.264613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.264642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.264842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.264855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.265129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.265140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.265322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.265335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.580 [2024-07-15 23:53:17.265606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.580 [2024-07-15 23:53:17.265617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.580 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.265811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.265823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.266017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.266028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.266233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.266245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.266448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.266459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.266655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.266667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.266947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.266959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.267199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.267210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.267486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.267498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.267749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.267764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.267960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.267972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.268272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.268284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.268550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.268562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.268815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.268826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.268961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.268973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.269160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.269171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.269394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.269405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.269650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.269662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.269952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.269963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.270234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.270246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.270382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.270394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.270575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.270587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.270799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.270812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.271092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.271103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.271232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.271244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.271544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.271555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.271807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.271819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.272022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.272034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.272230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.272243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.272427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.272439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.272711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.272723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.272923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.272935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.273211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.273222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.273494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.273506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.273702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.273713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.273930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.273942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.274125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.274137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.274386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.274398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.274635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.274646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.274767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.274778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.274963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.581 [2024-07-15 23:53:17.274975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.581 qpair failed and we were unable to recover it.
00:27:28.581 [2024-07-15 23:53:17.275169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.275180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.275371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.275383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.275608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.275620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.275847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.275858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.275992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.276004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.276249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.276261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.276465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.276476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.276725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.276736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.276917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.276931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.277202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.277214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.277517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.277528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.277751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.277766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.277965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.277975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.278167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.278177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.278435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.278445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.278644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.278655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.278950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.278959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.279114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.279124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.279316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.279327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.279454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.279463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.279711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.279721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.279902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.279911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.280117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.280127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.280335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.280345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.280593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.280603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.280798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.280808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.280991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.281002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.281190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.281199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.281392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.281403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.281604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.281614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.281896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.281906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.282123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.282133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.282405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.282415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.282596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.282606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.282901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.282911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.283093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.283103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.283286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.283296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.283569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.283579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.283831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.283842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.284032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.284041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.582 [2024-07-15 23:53:17.284221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.582 [2024-07-15 23:53:17.284235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.582 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.284506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.284515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.284810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.284819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.285100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.285109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.285359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.285369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.285640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.285650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.285899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.285908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.286086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.286095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.286358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.286370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.286641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.286651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.286837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.286848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.287029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.287039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.287240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.287250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.287518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.287528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.287715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.287725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.287956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.287966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.288174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.288183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.288434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.288444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.288571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.288581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.288854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.288864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.289106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.289115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.289369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.289379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.289567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.583 [2024-07-15 23:53:17.289577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.583 qpair failed and we were unable to recover it.
00:27:28.583 [2024-07-15 23:53:17.289767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.289776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.289930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.289940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.290071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.290081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.290349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.290359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.290558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.290568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 
00:27:28.583 [2024-07-15 23:53:17.290840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.290849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.291044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.291054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.291266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.291276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.291521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.291531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.291752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.291762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 
00:27:28.583 [2024-07-15 23:53:17.292052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.292062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.292196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.292205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.292419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.292429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.292672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.292681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.292880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.292889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 
00:27:28.583 [2024-07-15 23:53:17.293166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.293176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.293427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.293437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.293679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.583 [2024-07-15 23:53:17.293690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.583 qpair failed and we were unable to recover it. 00:27:28.583 [2024-07-15 23:53:17.293879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.293891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.294091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.294102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 
00:27:28.584 [2024-07-15 23:53:17.294295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.294306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.294499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.294509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.294770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.294782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.295032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.295042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.295316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.295326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 
00:27:28.584 [2024-07-15 23:53:17.295596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.295608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.295825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.295834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.296111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.296120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.296439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.296449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.296588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.296597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 
00:27:28.584 [2024-07-15 23:53:17.296843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.296853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.297050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.297060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.297328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.297338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.297639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.297648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.297905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.297915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 
00:27:28.584 [2024-07-15 23:53:17.298166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.298176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.298458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.298469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.298754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.298764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.298984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.298993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.299242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.299252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 
00:27:28.584 [2024-07-15 23:53:17.299426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.299435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.299561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.299571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.299774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.299783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.299928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.299938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.300119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.300128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 
00:27:28.584 [2024-07-15 23:53:17.300328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.300338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.300586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.300596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.300789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.300798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.301005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.301014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.301196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.301205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 
00:27:28.584 [2024-07-15 23:53:17.301419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.584 [2024-07-15 23:53:17.301429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.584 qpair failed and we were unable to recover it. 00:27:28.584 [2024-07-15 23:53:17.301621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.301630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.301810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.301820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.302036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.302045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.302321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.302332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 
00:27:28.585 [2024-07-15 23:53:17.302608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.302619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.302896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.302907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.303091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.303101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.303398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.303407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.303675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.303685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 
00:27:28.585 [2024-07-15 23:53:17.303888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.303898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.304163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.304172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.304447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.304457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.304659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.304669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.304959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.304969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 
00:27:28.585 [2024-07-15 23:53:17.305231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.305241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.305492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.305502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.305784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.305794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.306007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.306017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.306275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.306285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 
00:27:28.585 [2024-07-15 23:53:17.306493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.306504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.306770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.306780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.307011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.307020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.307156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.307166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.307416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.307426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 
00:27:28.585 [2024-07-15 23:53:17.307625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.307634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.307921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.307931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.308116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.308126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.308318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.308329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 00:27:28.585 [2024-07-15 23:53:17.308542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.585 [2024-07-15 23:53:17.308552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.585 qpair failed and we were unable to recover it. 
00:27:28.588 [2024-07-15 23:53:17.333840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.333849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.334052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.334062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.334260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.334270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.334454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.334464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.334607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.334617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 
00:27:28.588 [2024-07-15 23:53:17.334896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.334906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.335069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.335078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.335250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.335259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.335444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.335454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.335657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.335667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 
00:27:28.588 [2024-07-15 23:53:17.335936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.335945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.336207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.336216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.336468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.336478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.336746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.336756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.336959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.336968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 
00:27:28.588 [2024-07-15 23:53:17.337149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.337158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.588 qpair failed and we were unable to recover it. 00:27:28.588 [2024-07-15 23:53:17.337373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.588 [2024-07-15 23:53:17.337383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.337587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.337596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.337888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.337897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.338012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.338022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-15 23:53:17.338272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.338282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.338481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.338490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.338721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.338733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.338931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.338940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.339204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.339214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-15 23:53:17.339506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.339516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.339780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.339789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.340035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.340045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.340246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.340256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.340502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.340512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-15 23:53:17.340709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.340719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.340900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.340909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.341040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.341050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.341251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.341261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.341454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.341464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-15 23:53:17.341729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.341738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.341958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.341968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.342219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.342232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.342494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.342503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.342692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.342702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-15 23:53:17.342969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.342979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.343252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.343262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.343465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.343475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.343693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.343704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.343894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.343903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-15 23:53:17.344087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.344097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.344396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.344406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.344625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.344634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.344813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.344823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.345074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.345083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-15 23:53:17.345162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.345171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.345389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.345399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.345599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.345609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.345738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.345747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.346026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.346036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 
00:27:28.589 [2024-07-15 23:53:17.346230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.346240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.346375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.589 [2024-07-15 23:53:17.346385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.589 qpair failed and we were unable to recover it. 00:27:28.589 [2024-07-15 23:53:17.346501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.346511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.346622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.346631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.346771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.346780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 
00:27:28.590 [2024-07-15 23:53:17.346996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.347006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.347216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.347233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.347374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.347386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.347588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.347597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.347793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.347802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 
00:27:28.590 [2024-07-15 23:53:17.348071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.348081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.348272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.348281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.348472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.348481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.348686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.348696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.348881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.348890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 
00:27:28.590 [2024-07-15 23:53:17.349136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.349146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.349324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.349335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.349471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.349481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.349684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.349694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 00:27:28.590 [2024-07-15 23:53:17.349875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.590 [2024-07-15 23:53:17.349885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.590 qpair failed and we were unable to recover it. 
00:27:28.590 [2024-07-15 23:53:17.350016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.350025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.350274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.350284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.350478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.350488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.350617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.350626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.350806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.350815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.350946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.350956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.351241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.351251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.351511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.351521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.351649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.351658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.351841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.351850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.352050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.352060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.352196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.352205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.352507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.352516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.352695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.352705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.352824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.352834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.352983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.352993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.353106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.353116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.353250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.353260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.353405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.353415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.353554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.353564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.353748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.353758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.353953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.590 [2024-07-15 23:53:17.353963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.590 qpair failed and we were unable to recover it.
00:27:28.590 [2024-07-15 23:53:17.354092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.354102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.354298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.354307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.354501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.354510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.354628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.354638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.354784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.354794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.354888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.354899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.355088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.355098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.355249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.355259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.355391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.355401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.355550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.355559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.355776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.355785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.355926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.355935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.356061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.356071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.356263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.356273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.356469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.356479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.356661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.356671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.356790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.356799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.357064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.357073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.357290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.357300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.357504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.357513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.357692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.357701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.357835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.357845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.358043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.358052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.358251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.358262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.358460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.358469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.358714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.358723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.358845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.358855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.359127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.359137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.359275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.359286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.359406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.359416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.359688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.359697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.359878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.359888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.360027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.360036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.360231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.360240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.360552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.360561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.360710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.360719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.360851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.360861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.360935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.360944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.361121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.361131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.361337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.361347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.591 [2024-07-15 23:53:17.361548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.591 [2024-07-15 23:53:17.361558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.591 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.361756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.361765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.361961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.361971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.362182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.362192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.362371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.362381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.362600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.362611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.362789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.362798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.363015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.363025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.363297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.363307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.363493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.363502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.363680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.363690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.363962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.363971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.364229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.364239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.364381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.364391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.364537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.364547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.364738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.364747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.365021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.365031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.365210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.365219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.365333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.365343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.365613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.365623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.365820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.365831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.365956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.365966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.366098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.366107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.366357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.366366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.366495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.366504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.366704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.366713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.366907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.366917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.367035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.367044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.367248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.367258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.367464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.367473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.367657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.367667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.367867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.367876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.368063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.368073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.368329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.368340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.368519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.368529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.592 qpair failed and we were unable to recover it.
00:27:28.592 [2024-07-15 23:53:17.368727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.592 [2024-07-15 23:53:17.368737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.368931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.368941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.369152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.369161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.369362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.369372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.369513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.369523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.369800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.369810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.370020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.370030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.370229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.370239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.370539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.370549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.370796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.370805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.370935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.370947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.371143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.371152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.371351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.371361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.371609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.371619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.371748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.371759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.371962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.371972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.372183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.372193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.372442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.593 [2024-07-15 23:53:17.372452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.593 qpair failed and we were unable to recover it.
00:27:28.593 [2024-07-15 23:53:17.372580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.372590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.372811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.372821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.372939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.372948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.373086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.373095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.373236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.373246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 
00:27:28.593 [2024-07-15 23:53:17.373436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.373445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.373597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.373607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.373822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.373832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.373961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.373971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.374157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.374167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 
00:27:28.593 [2024-07-15 23:53:17.374297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.374307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.374525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.374535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.374639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.374649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.374832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.374841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.374968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.374978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 
00:27:28.593 [2024-07-15 23:53:17.375113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.375123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.375310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.375320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.375511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.375521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.375639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.375649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.375823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.375855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 
00:27:28.593 [2024-07-15 23:53:17.376065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.376081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.593 [2024-07-15 23:53:17.376404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.593 [2024-07-15 23:53:17.376419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.593 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.376697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.376711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.376971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.376985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.377139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.377153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-07-15 23:53:17.377376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.377390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.377534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.377547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.377829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.377842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.378028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.378042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.378198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.378211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-07-15 23:53:17.378239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e0000 (9): Bad file descriptor 00:27:28.594 [2024-07-15 23:53:17.378533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.378544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.378690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.378700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.378947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.378957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.379074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.379084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.379355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.379365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-07-15 23:53:17.379505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.379514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.379718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.379728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.379931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.379941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.380074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.380084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.380329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.380339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-07-15 23:53:17.380585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.380595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.380720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.380731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.380950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.380960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.381096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.381105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.381232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.381242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-07-15 23:53:17.381447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.381458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.381727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.381736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.381913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.381923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.382121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.382130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.382328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.382338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-07-15 23:53:17.382532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.382542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.382673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.382683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.382872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.382881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.383146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.383156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.383335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.383345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.594 [2024-07-15 23:53:17.383592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.383602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.383752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.383762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.383893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.383903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.384095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.384104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 00:27:28.594 [2024-07-15 23:53:17.384354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.594 [2024-07-15 23:53:17.384364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.594 qpair failed and we were unable to recover it. 
00:27:28.595 [2024-07-15 23:53:17.384497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.384507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.384700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.384709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.384920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.384930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.385112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.385122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.385409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.385419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 
00:27:28.595 [2024-07-15 23:53:17.385607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.385618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.385810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.385820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.386107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.386117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.386244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.386254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.386480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.386490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 
00:27:28.595 [2024-07-15 23:53:17.386606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.386616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.386753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.386763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.386903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.386913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.387157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.387166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.387443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.387453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 
00:27:28.595 [2024-07-15 23:53:17.387650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.387661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.387857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.387866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.388006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.388016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.388127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.388137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 00:27:28.595 [2024-07-15 23:53:17.388393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.595 [2024-07-15 23:53:17.388403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.595 qpair failed and we were unable to recover it. 
00:27:28.598 [2024-07-15 23:53:17.410255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.410265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.410566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.410576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.410701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.410711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.410930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.410940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.411071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.411080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 
00:27:28.598 [2024-07-15 23:53:17.411282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.411293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.411570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.411580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.411769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.411779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.412026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.412036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.412165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.412175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 
00:27:28.598 [2024-07-15 23:53:17.412298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.412308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.412486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.412496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.412699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.412709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.412851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.412861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.412982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.412992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 
00:27:28.598 [2024-07-15 23:53:17.413215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.598 [2024-07-15 23:53:17.413229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.598 qpair failed and we were unable to recover it. 00:27:28.598 [2024-07-15 23:53:17.413427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.413437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.413710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.413720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.413854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.413864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.414127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.414137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-15 23:53:17.414332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.414341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.414535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.414546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.414733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.414743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.414921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.414930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.415175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.415185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-15 23:53:17.415367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.415377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.415508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.415518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.415764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.415774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.415966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.415976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.416139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.416149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-15 23:53:17.416419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.416429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.416627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.416637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.416883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.416893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.417083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.417093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.417286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.417296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-15 23:53:17.417436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.417446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.417656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.417666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.417867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.417877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.418065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.418075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.418263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.418273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-15 23:53:17.418523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.418533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.418783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.418793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.419010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.419020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.419266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.419277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.419405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.419415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-15 23:53:17.419493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.419503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.419695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.419705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.419885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.419895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.420039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.420049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.420244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.420255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-15 23:53:17.420445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.420455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.420644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.420653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.420850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.420860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.421038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.421048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.421248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.421259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 
00:27:28.599 [2024-07-15 23:53:17.421500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.421512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.599 qpair failed and we were unable to recover it. 00:27:28.599 [2024-07-15 23:53:17.421710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.599 [2024-07-15 23:53:17.421720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.421965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.421975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.422193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.422203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.422418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.422428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 
00:27:28.600 [2024-07-15 23:53:17.422626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.422635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.422762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.422772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.423029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.423038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.423175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.423185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.423379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.423389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 
00:27:28.600 [2024-07-15 23:53:17.423591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.423601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.423674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.423684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.423950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.423960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.424094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.424104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.424353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.424363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 
00:27:28.600 [2024-07-15 23:53:17.424484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.424494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.424700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.424710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.424957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.424967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.425161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.425171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 00:27:28.600 [2024-07-15 23:53:17.425371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.425381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it. 
00:27:28.600 [2024-07-15 23:53:17.425495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.600 [2024-07-15 23:53:17.425505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.600 qpair failed and we were unable to recover it.
[... the same three-message sequence — posix.c:1038:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats ~114 more times between 23:53:17.425578 and 23:53:17.447360 ...]
00:27:28.603 [2024-07-15 23:53:17.447489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.447500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.447678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.447687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.447870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.447879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.448083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.448092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.448283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.448293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 
00:27:28.603 [2024-07-15 23:53:17.448544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.448554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.448750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.448759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.448869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.448879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.449126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.449136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.449277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.449287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 
00:27:28.603 [2024-07-15 23:53:17.449417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.449426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.449700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.449710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.449823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.449833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.450013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.450023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.450208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.450218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 
00:27:28.603 [2024-07-15 23:53:17.450428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.450438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.450626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.450636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.450824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.450833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.603 qpair failed and we were unable to recover it. 00:27:28.603 [2024-07-15 23:53:17.450975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.603 [2024-07-15 23:53:17.450985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.451190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.451199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 
00:27:28.604 [2024-07-15 23:53:17.451380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.451390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.451636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.451646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.451833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.451843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.452113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.452123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.452301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.452312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 
00:27:28.604 [2024-07-15 23:53:17.452488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.452498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.452688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.452698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.452791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.452801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.452916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.452926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.453174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.453184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 
00:27:28.604 [2024-07-15 23:53:17.453326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.453336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.453516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.453526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.453710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.453720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.453917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.453927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.454106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.454117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 
00:27:28.604 [2024-07-15 23:53:17.454308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.454318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.454588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.454598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.454741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.454751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.454956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.454966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.455095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.455105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 
00:27:28.604 [2024-07-15 23:53:17.455249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.455259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.455441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.455451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.455634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.455644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.455837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.455847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.456026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.456036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 
00:27:28.604 [2024-07-15 23:53:17.456163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.456173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.456332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.456342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.456531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.456541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.456731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.456742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.456925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.456935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 
00:27:28.604 [2024-07-15 23:53:17.457133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.457142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.457260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.457271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.457515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.457527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.457708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.457718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.457968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.457978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 
00:27:28.604 [2024-07-15 23:53:17.458191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.458201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.458335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.458345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.458525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.458535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.604 [2024-07-15 23:53:17.458681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.604 [2024-07-15 23:53:17.458691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.604 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.458932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.458942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 
00:27:28.605 [2024-07-15 23:53:17.459156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.459166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.459356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.459366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.459580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.459590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.459723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.459733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.459862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.459872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 
00:27:28.605 [2024-07-15 23:53:17.460066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.460076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.460266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.460276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.460523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.460532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.460662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.460672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.460799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.460808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 
00:27:28.605 [2024-07-15 23:53:17.461066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.461075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.461266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.461277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.461474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.461484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.461669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.461679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 00:27:28.605 [2024-07-15 23:53:17.461870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.605 [2024-07-15 23:53:17.461881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.605 qpair failed and we were unable to recover it. 
00:27:28.605 [2024-07-15 23:53:17.462085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.605 [2024-07-15 23:53:17.462094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.605 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7fbe48000b90 at 10.0.0.2:4420, "qpair failed and we were unable to recover it.") repeats continuously from 23:53:17.462 through 23:53:17.485 ...]
00:27:28.608 [2024-07-15 23:53:17.485602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.608 [2024-07-15 23:53:17.485612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.608 qpair failed and we were unable to recover it.
00:27:28.608 [2024-07-15 23:53:17.485766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.485775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.486022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.486032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.486211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.486220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.486494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.486504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.486646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.486656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 
00:27:28.608 [2024-07-15 23:53:17.486845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.486854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.486996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.487006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.487150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.487160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.487405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.487415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.487664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.487674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 
00:27:28.608 [2024-07-15 23:53:17.487851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.487861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.488074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.488084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.488289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.488299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.488517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.488527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.488745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.488754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 
00:27:28.608 [2024-07-15 23:53:17.489022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.489032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.489274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.489284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.489415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.489426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.489700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.489710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.489976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.489986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 
00:27:28.608 [2024-07-15 23:53:17.490186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.490196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.490340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.608 [2024-07-15 23:53:17.490350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.608 qpair failed and we were unable to recover it. 00:27:28.608 [2024-07-15 23:53:17.490531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.490542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.490655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.490665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.490857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.490867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 
00:27:28.609 [2024-07-15 23:53:17.491015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.491025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.491297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.491307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.491465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.491475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.491753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.491763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.491978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.491988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 
00:27:28.609 [2024-07-15 23:53:17.492122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.492132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.492268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.492278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.492456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.492466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.492676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.492686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.492831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.492840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 
00:27:28.609 [2024-07-15 23:53:17.493051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.493061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.493260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.493270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.493453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.493462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.493649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.493659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.493836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.493846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 
00:27:28.609 [2024-07-15 23:53:17.494040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.494049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.494238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.494248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.494541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.494551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.494780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.494791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.495056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.495065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 
00:27:28.609 [2024-07-15 23:53:17.495214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.495227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.495416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.495426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.495641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.495651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.495869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.495879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.496106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.496116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 
00:27:28.609 [2024-07-15 23:53:17.496251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.496261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.496459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.496469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.496648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.496658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.496848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.496858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.497083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.497092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 
00:27:28.609 [2024-07-15 23:53:17.497292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.497302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.497433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.497444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.497737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.497747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.497946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.609 [2024-07-15 23:53:17.497956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.609 qpair failed and we were unable to recover it. 00:27:28.609 [2024-07-15 23:53:17.498180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.498190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 
00:27:28.610 [2024-07-15 23:53:17.498387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.498396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.498611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.498621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.498818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.498829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.499052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.499062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.499194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.499203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 
00:27:28.610 [2024-07-15 23:53:17.499494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.499504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.499710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.499720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.499840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.499851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.500071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.500081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.500231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.500241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 
00:27:28.610 [2024-07-15 23:53:17.500518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.500528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.500647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.500657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.500900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.500910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.501155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.501165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 00:27:28.610 [2024-07-15 23:53:17.501344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.610 [2024-07-15 23:53:17.501354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.610 qpair failed and we were unable to recover it. 
00:27:28.610 [2024-07-15 23:53:17.501533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.610 [2024-07-15 23:53:17.501543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.610 qpair failed and we were unable to recover it.
00:27:28.610 [... same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record repeated for tqpair=0x7fbe48000b90 (addr=10.0.0.2, port=4420) through 2024-07-15 23:53:17.524003 ...]
00:27:28.613 [2024-07-15 23:53:17.524210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.524220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.524411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.524421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.524611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.524620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.524730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.524740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.524984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.524994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 
00:27:28.613 [2024-07-15 23:53:17.525129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.525139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.525275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.525285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.525395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.525404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.525544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.525553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.525754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.525764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 
00:27:28.613 [2024-07-15 23:53:17.525957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.525967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.526081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.526091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.526218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.526231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.526484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.526494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.613 [2024-07-15 23:53:17.526759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.526769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 
00:27:28.613 [2024-07-15 23:53:17.527017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.613 [2024-07-15 23:53:17.527027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.613 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.527164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.527174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.527316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.527326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.527537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.527547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.527747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.527757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 
00:27:28.897 [2024-07-15 23:53:17.527939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.527949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.528143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.528152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.528267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.528278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.528502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.528512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.528703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.528713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 
00:27:28.897 [2024-07-15 23:53:17.528897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.528907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.529161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.529171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.529347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.529358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.529493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.529503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 00:27:28.897 [2024-07-15 23:53:17.529645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.897 [2024-07-15 23:53:17.529654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.897 qpair failed and we were unable to recover it. 
00:27:28.898 [2024-07-15 23:53:17.529835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.529845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.529972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.529982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.530118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.530129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.530263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.530273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.530463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.530473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 
00:27:28.898 [2024-07-15 23:53:17.530768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.530778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.530892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.530902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.531091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.531101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.531290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.531301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.531498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.531508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 
00:27:28.898 [2024-07-15 23:53:17.531582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.531592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.531784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.531794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.531971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.531981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.532165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.532175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.532309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.532319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 
00:27:28.898 [2024-07-15 23:53:17.532451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.532461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.532662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.532672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.532865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.532875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.533080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.533090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.533313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.533324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 
00:27:28.898 [2024-07-15 23:53:17.533537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.533547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.533796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.533806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.533985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.533995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.534187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.534197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.534425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.534436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 
00:27:28.898 [2024-07-15 23:53:17.534632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.534642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.534889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.534899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.535106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.535116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.535307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.535317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.535442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.535461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 
00:27:28.898 [2024-07-15 23:53:17.535662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.535676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.535810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.535824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.535928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.535941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.536199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.536213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.536525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.536539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 
00:27:28.898 [2024-07-15 23:53:17.536739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.536753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.536952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.536966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.537160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.537174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.537378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.537393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 00:27:28.898 [2024-07-15 23:53:17.537585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.537598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.898 qpair failed and we were unable to recover it. 
00:27:28.898 [2024-07-15 23:53:17.537784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.898 [2024-07-15 23:53:17.537798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.537929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.537943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.538142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.538160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.538414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.538428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.538577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.538590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 
00:27:28.899 [2024-07-15 23:53:17.538737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.538750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.538984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.538997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.539252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.539265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.539484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.539497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.539753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.539766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 
00:27:28.899 [2024-07-15 23:53:17.540022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.540035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.540236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.540250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.540445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.540459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.540731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.540745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.540943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.540956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 
00:27:28.899 [2024-07-15 23:53:17.541144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.541157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.541371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.541385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.541587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.541601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.541802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.541815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.542030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.542043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 
00:27:28.899 [2024-07-15 23:53:17.542241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.542255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.542523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.542537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.542662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.542676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.542883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.542896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.543103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.543117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 
00:27:28.899 [2024-07-15 23:53:17.543372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.543386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.543598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.543612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.543817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.543831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.544056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.544069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.544285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.544316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 
00:27:28.899 [2024-07-15 23:53:17.544612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.544633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.544837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.544849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.545001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.545010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.545199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.545208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.545485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.545495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 
00:27:28.899 [2024-07-15 23:53:17.545740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.545750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.545984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.545993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.546256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.546266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.546454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.546463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 00:27:28.899 [2024-07-15 23:53:17.546644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.899 [2024-07-15 23:53:17.546654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.899 qpair failed and we were unable to recover it. 
00:27:28.900 [2024-07-15 23:53:17.546831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.546841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.547039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.547049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.547245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.547257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.547471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.547481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.547688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.547698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 
00:27:28.900 [2024-07-15 23:53:17.547840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.547850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.548126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.548136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.548252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.548263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.548395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.548405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.548674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.548684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 
00:27:28.900 [2024-07-15 23:53:17.548929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.548938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.549164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.549174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.549357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.549367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.549566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.549576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.549796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.549806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 
00:27:28.900 [2024-07-15 23:53:17.550051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.550061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.550256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.550266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.550461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.550471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.550689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.550699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.550923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.550933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 
00:27:28.900 [2024-07-15 23:53:17.551071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.551081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.551333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.551343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.551591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.551600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.551792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.551802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.552015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.552024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 
00:27:28.900 [2024-07-15 23:53:17.552149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.552159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.552360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.552370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.552509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.552519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.552696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.552706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.552823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.552833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 
00:27:28.900 [2024-07-15 23:53:17.553026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.553036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.553169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.553178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.553424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.553435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.553615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.553625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.553869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.553879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 
00:27:28.900 [2024-07-15 23:53:17.553971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.900 [2024-07-15 23:53:17.553981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.900 qpair failed and we were unable to recover it. 00:27:28.900 [2024-07-15 23:53:17.554180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.554189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.554476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.554486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.554680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.554690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.554910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.554920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 
00:27:28.901 [2024-07-15 23:53:17.555141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.555151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.555422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.555432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.555641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.555651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.555776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.555786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.556069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.556078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 
00:27:28.901 [2024-07-15 23:53:17.556295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.556305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.556498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.556508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.556697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.556708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.556792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.556802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.556935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.556944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 
00:27:28.901 [2024-07-15 23:53:17.557088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.557098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.557232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.557242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.557368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.557378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.557596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.557605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.557820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.557830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 
00:27:28.901 [2024-07-15 23:53:17.558015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.558026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.558216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.558229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.558360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.558370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.558486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.558496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 00:27:28.901 [2024-07-15 23:53:17.558680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.901 [2024-07-15 23:53:17.558690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.901 qpair failed and we were unable to recover it. 
00:27:28.901 [2024-07-15 23:53:17.558814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.558824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.558954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.558964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.559239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.559250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.559441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.559451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.559704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.559714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.559838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.559848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.560010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.560020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.560266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.560276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.560524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.560534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.560670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.560682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.560879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.560889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.561026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.561036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.561252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.561262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.561401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.561411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.901 [2024-07-15 23:53:17.561554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.901 [2024-07-15 23:53:17.561565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.901 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.561760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.561770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.561955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.561964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.562238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.562248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.562439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.562449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.562628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.562638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.562824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.562834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.563104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.563115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.563261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.563271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.563458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.563468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.563714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.563724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.563983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.563993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.564239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.564249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.564387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.564397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.564663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.564672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.564811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.564821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.565093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.565103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.565252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.565262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.565391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.565401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.565594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.565604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.565736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.565745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.566018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.566028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.566163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.902 [2024-07-15 23:53:17.566173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.902 qpair failed and we were unable to recover it.
00:27:28.902 [2024-07-15 23:53:17.566382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.566392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.566583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.566592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.566805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.566815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.567009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.567018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.567148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.567158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.567343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.567353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.567477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.567487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.567684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.567694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.567938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.567947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.568078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.568088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.568229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.568239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.568436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.568446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.568561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.568572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.568845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.568855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.569128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.569137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.569337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.569348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.569546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.569555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.569754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.569764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.569957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.569967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.570145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.570155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.570448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.570458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.570649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.570659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.570845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.570855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.571048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.571058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.571329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.571339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.571450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.571460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.571593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.571603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.903 [2024-07-15 23:53:17.571847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.903 [2024-07-15 23:53:17.571857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.903 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.572047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.572057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.572194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.572204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.572399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.572409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.572656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.572665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.572934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.572944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.573126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.573136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.573401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.573411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.573525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.573535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.573727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.573737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.573982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.573992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.574168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.574178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.574399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.574409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.574678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.574688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.574823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.574833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.575022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.575032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.575166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.575175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.575388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.575399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.575669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.575679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.575826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.575836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.575981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.575991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.576167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.576176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.576313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.576323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.576453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.576463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.576653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.576663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.576804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.576816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.577082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.577091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.577337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.904 [2024-07-15 23:53:17.577348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.904 qpair failed and we were unable to recover it.
00:27:28.904 [2024-07-15 23:53:17.577490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.577500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.577693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.577703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.577915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.577925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.578201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.578211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.578469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.578479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.578610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.578620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.578818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.578828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.579094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.579104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.579286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.579297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.579501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.579510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.579642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.579652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.579845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.579855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.580034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.580044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.580253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.580263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.580379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.580389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.580522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.580532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.580617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.580627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.580720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.580729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.580931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.580941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.581176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.581186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.581321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.581332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.581530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.581540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.581749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.905 [2024-07-15 23:53:17.581759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.905 qpair failed and we were unable to recover it.
00:27:28.905 [2024-07-15 23:53:17.581981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.905 [2024-07-15 23:53:17.581991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.905 qpair failed and we were unable to recover it. 00:27:28.905 [2024-07-15 23:53:17.582266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.905 [2024-07-15 23:53:17.582276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.905 qpair failed and we were unable to recover it. 00:27:28.905 [2024-07-15 23:53:17.582488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.905 [2024-07-15 23:53:17.582499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.905 qpair failed and we were unable to recover it. 00:27:28.905 [2024-07-15 23:53:17.582635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.905 [2024-07-15 23:53:17.582645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.905 qpair failed and we were unable to recover it. 00:27:28.905 [2024-07-15 23:53:17.582825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.905 [2024-07-15 23:53:17.582835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.905 qpair failed and we were unable to recover it. 
00:27:28.906 [2024-07-15 23:53:17.583131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.583141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.583340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.583350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.583491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.583500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.583615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.583625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.583809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.583819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 
00:27:28.906 [2024-07-15 23:53:17.583938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.583948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.584167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.584177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.584373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.584383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.584570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.584580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.584769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.584783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 
00:27:28.906 [2024-07-15 23:53:17.584895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.584905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.585118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.585127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.585387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.585398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.585580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.585589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.585738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.585748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 
00:27:28.906 [2024-07-15 23:53:17.585886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.585896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.586136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.586146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.586349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.586359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.586575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.586585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.586851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.586861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 
00:27:28.906 [2024-07-15 23:53:17.587055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.587065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.587257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.587267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.587409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.587419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.587667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.587677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.587807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.587818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 
00:27:28.906 [2024-07-15 23:53:17.588018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.588028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.588149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.588158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.906 qpair failed and we were unable to recover it. 00:27:28.906 [2024-07-15 23:53:17.588337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.906 [2024-07-15 23:53:17.588348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.588565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.588574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.588756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.588766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 
00:27:28.907 [2024-07-15 23:53:17.588964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.588973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.589114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.589124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.589301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.589312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.589494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.589504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.589718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.589728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 
00:27:28.907 [2024-07-15 23:53:17.589864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.589874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.590070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.590080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.590265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.590275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.590460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.590470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.590738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.590748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 
00:27:28.907 [2024-07-15 23:53:17.590926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.590935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.591066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.591076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.591258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.591268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.591458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.591468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.591666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.591675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 
00:27:28.907 [2024-07-15 23:53:17.591820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.591830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.591979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.591989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.592132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.592142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.592340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.592350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.592537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.592548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 
00:27:28.907 [2024-07-15 23:53:17.592747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.592757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.593027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.593037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.593219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.593235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.593430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.593439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 00:27:28.907 [2024-07-15 23:53:17.593685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.907 [2024-07-15 23:53:17.593694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.907 qpair failed and we were unable to recover it. 
00:27:28.907 [2024-07-15 23:53:17.593952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.593962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.594103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.594112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.594317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.594328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.594552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.594562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.594809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.594819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 
00:27:28.908 [2024-07-15 23:53:17.595090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.595100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.595292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.595302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.595547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.595557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.595743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.595753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.595972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.595982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 
00:27:28.908 [2024-07-15 23:53:17.596201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.596211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.596419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.596429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.596676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.596686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.596867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.596876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.597145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.597155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 
00:27:28.908 [2024-07-15 23:53:17.597346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.597356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.597554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.597563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.597858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.597868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.598059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.598069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-07-15 23:53:17.598314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.908 [2024-07-15 23:53:17.598324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.908 qpair failed and we were unable to recover it. 
00:27:28.911 [2024-07-15 23:53:17.620509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.620521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 00:27:28.911 [2024-07-15 23:53:17.620664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.620673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 00:27:28.911 [2024-07-15 23:53:17.620783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.620794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 00:27:28.911 [2024-07-15 23:53:17.621003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.621014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 00:27:28.911 [2024-07-15 23:53:17.621142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.621152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 
00:27:28.911 [2024-07-15 23:53:17.621242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.621252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 00:27:28.911 [2024-07-15 23:53:17.621384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.621394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 00:27:28.911 [2024-07-15 23:53:17.621530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.621540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 00:27:28.911 [2024-07-15 23:53:17.621808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.621818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 00:27:28.911 [2024-07-15 23:53:17.622016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.622026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 
00:27:28.911 [2024-07-15 23:53:17.622160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.622170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 00:27:28.911 [2024-07-15 23:53:17.622440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.622450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.911 qpair failed and we were unable to recover it. 00:27:28.911 [2024-07-15 23:53:17.622592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.911 [2024-07-15 23:53:17.622602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.622795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.622805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.622998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.623009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 
00:27:28.912 [2024-07-15 23:53:17.623146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.623156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.623341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.623352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.623623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.623634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.623784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.623794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.623898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.623908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 
00:27:28.912 [2024-07-15 23:53:17.624111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.624123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.624371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.624381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.624573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.624583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.624781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.624792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.625036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.625050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 
00:27:28.912 [2024-07-15 23:53:17.625235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.625245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.625385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.625395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.625526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.625536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.625728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.625739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.625882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.625892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 
00:27:28.912 [2024-07-15 23:53:17.626137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.626148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.626267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.626278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.626417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.626428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.626625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.626635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.626819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.626830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 
00:27:28.912 [2024-07-15 23:53:17.626952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.626962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.627068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.627078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.627293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.627303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.627517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.627527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.627648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.627659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 
00:27:28.912 [2024-07-15 23:53:17.627790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.627800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.627887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.627897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.628145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.628155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.628341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.628351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.628486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.628496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 
00:27:28.912 [2024-07-15 23:53:17.628701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.628711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.628910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.912 [2024-07-15 23:53:17.628920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.912 qpair failed and we were unable to recover it. 00:27:28.912 [2024-07-15 23:53:17.629107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.629117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.629306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.629316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.629506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.629516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 
00:27:28.913 [2024-07-15 23:53:17.629718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.629729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.629870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.629881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.630013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.630023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.630243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.630254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.630520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.630530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 
00:27:28.913 [2024-07-15 23:53:17.630663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.630672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.630810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.630820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.631083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.631093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.631270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.631280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.631393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.631403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 
00:27:28.913 [2024-07-15 23:53:17.631561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.631571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.631686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.631697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.631886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.631896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.632028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.632038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.632296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.632309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 
00:27:28.913 [2024-07-15 23:53:17.632557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.632567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.632715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.632726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.632907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.632918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.633063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.633073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.633147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.633157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 
00:27:28.913 [2024-07-15 23:53:17.633298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.633309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.633439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.633449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.633586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.633597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.633727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.633738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.633848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.633858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 
00:27:28.913 [2024-07-15 23:53:17.634037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.634047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.634317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.634328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.634596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.634607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.634721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.634732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.634999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.635010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 
00:27:28.913 [2024-07-15 23:53:17.635100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.635110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.635240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.635251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.635377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.635388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.635587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.635598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 00:27:28.913 [2024-07-15 23:53:17.635797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.913 [2024-07-15 23:53:17.635808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.913 qpair failed and we were unable to recover it. 
00:27:28.913 [2024-07-15 23:53:17.635925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.635935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.636129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.636141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.636270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.636281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.636392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.636403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.636489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.636499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 
00:27:28.914 [2024-07-15 23:53:17.636613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.636624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.636845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.636863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.637062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.637077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.637165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.637179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.637425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.637440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 
00:27:28.914 [2024-07-15 23:53:17.637640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.637655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.637845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.637860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.637993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.638006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.638154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.638167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.638442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.638457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 
00:27:28.914 [2024-07-15 23:53:17.638656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.638670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.638815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.638829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.639032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.639047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.639200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.639214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.639420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.639438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 
00:27:28.914 [2024-07-15 23:53:17.639638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.639652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.639791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.639805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.640017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.640031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.640161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.640174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.640369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.640383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 
00:27:28.914 [2024-07-15 23:53:17.640516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.640530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.640661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.640674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.640876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.640889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.641092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.641105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.641292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.641306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 
00:27:28.914 [2024-07-15 23:53:17.641456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.641471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.641604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.641618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.641748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.641761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.641978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.641991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.642110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.642121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 
00:27:28.914 [2024-07-15 23:53:17.642320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.642331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.642459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.642469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.642602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.642612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.642735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.642745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.914 [2024-07-15 23:53:17.642968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.642978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 
00:27:28.914 [2024-07-15 23:53:17.643221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.914 [2024-07-15 23:53:17.643235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.914 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.643353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.643364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.643490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.643502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.643622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.643633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.643770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.643781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 
00:27:28.915 [2024-07-15 23:53:17.643906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.643917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.644112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.644133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.644284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.644299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.644585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.644599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.644761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.644774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 
00:27:28.915 [2024-07-15 23:53:17.644968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.644982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.645103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.645117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.645370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.645384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.645641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.645655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.645803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.645817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 
00:27:28.915 [2024-07-15 23:53:17.645936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.645949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.646148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.646161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.646395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.646410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.646546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.646560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.646711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.646728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 
00:27:28.915 [2024-07-15 23:53:17.646862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.646875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.647034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.647046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.647170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.647181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.647466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.647477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.647669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.647679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 
00:27:28.915 [2024-07-15 23:53:17.647868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.647879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.648075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.648085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.648217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.648237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.648375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.648385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.648578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.648589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 
00:27:28.915 [2024-07-15 23:53:17.648738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.648751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.648881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.648891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.649010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.649020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.649141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.649151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.649246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.649256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 
00:27:28.915 [2024-07-15 23:53:17.649443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.649453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.649587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.915 [2024-07-15 23:53:17.649597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.915 qpair failed and we were unable to recover it. 00:27:28.915 [2024-07-15 23:53:17.649724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.916 [2024-07-15 23:53:17.649735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.916 qpair failed and we were unable to recover it. 00:27:28.916 [2024-07-15 23:53:17.649934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.916 [2024-07-15 23:53:17.649945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.916 qpair failed and we were unable to recover it. 00:27:28.916 [2024-07-15 23:53:17.650089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.916 [2024-07-15 23:53:17.650099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.916 qpair failed and we were unable to recover it. 
00:27:28.916 [2024-07-15 23:53:17.650284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.916 [2024-07-15 23:53:17.650295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.916 qpair failed and we were unable to recover it. 00:27:28.916 [2024-07-15 23:53:17.650427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.916 [2024-07-15 23:53:17.650438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.916 qpair failed and we were unable to recover it. 00:27:28.916 [2024-07-15 23:53:17.650626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.916 [2024-07-15 23:53:17.650636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.916 qpair failed and we were unable to recover it. 00:27:28.916 [2024-07-15 23:53:17.650829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.916 [2024-07-15 23:53:17.650839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.916 qpair failed and we were unable to recover it. 00:27:28.916 [2024-07-15 23:53:17.651055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.916 [2024-07-15 23:53:17.651065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.916 qpair failed and we were unable to recover it. 
00:27:28.916 [2024-07-15 23:53:17.651191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.916 [2024-07-15 23:53:17.651201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.916 qpair failed and we were unable to recover it.
[... identical connect() failure / qpair recovery messages (errno = 111, ECONNREFUSED; tqpair=0x7fbe48000b90, addr=10.0.0.2, port=4420) repeated through 23:53:17.671222 omitted ...]
00:27:28.919 [2024-07-15 23:53:17.671431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.671441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.671572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.671582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.671708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.671718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.671862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.671872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.672083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.672093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 
00:27:28.919 [2024-07-15 23:53:17.672234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.672245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.672402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.672415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.672529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.672539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.672722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.672733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.672874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.672885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 
00:27:28.919 [2024-07-15 23:53:17.673002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.673012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.673163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.673173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.673425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.673436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.673550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.673561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.673812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.673824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 
00:27:28.919 [2024-07-15 23:53:17.673941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.673951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.674092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.674102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.674369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.674380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.674489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.674500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.674631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.674642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 
00:27:28.919 [2024-07-15 23:53:17.674767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.674778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.674996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.675006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.675186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.675196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.919 [2024-07-15 23:53:17.675409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.919 [2024-07-15 23:53:17.675420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.919 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.675628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.675638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 
00:27:28.920 [2024-07-15 23:53:17.675779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.675790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.675899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.675909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.676090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.676100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.676331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.676341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.676542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.676553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 
00:27:28.920 [2024-07-15 23:53:17.676665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.676675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.676803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.676813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.676989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.677000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.677143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.677153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.677345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.677356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 
00:27:28.920 [2024-07-15 23:53:17.677474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.677484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.677668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.677679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.677798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.677809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.677992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.678002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.678193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.678203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 
00:27:28.920 [2024-07-15 23:53:17.678275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.678286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.678537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.678547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.678678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.678688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.678881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.678891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.679070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.679080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 
00:27:28.920 [2024-07-15 23:53:17.679264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.679274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.679476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.679489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.679696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.679707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.679827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.679837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.680102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.680112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 
00:27:28.920 [2024-07-15 23:53:17.680319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.680330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.680462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.680472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.680540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.680550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.680799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.680809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 00:27:28.920 [2024-07-15 23:53:17.681076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.681086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.920 qpair failed and we were unable to recover it. 
00:27:28.920 [2024-07-15 23:53:17.681173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.920 [2024-07-15 23:53:17.681183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.681363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.681374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.681621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.681631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.681899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.681910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.682114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.682124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 
00:27:28.921 [2024-07-15 23:53:17.682316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.682327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.682518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.682528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.682718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.682728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.682909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.682919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.683067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.683077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 
00:27:28.921 [2024-07-15 23:53:17.683340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.683351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.683477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.683487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.683603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.683613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.683750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.683760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.683883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.683893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 
00:27:28.921 [2024-07-15 23:53:17.684023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.684032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.684259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.684270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.684459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.684469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.684584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.684594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 00:27:28.921 [2024-07-15 23:53:17.684868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.921 [2024-07-15 23:53:17.684878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.921 qpair failed and we were unable to recover it. 
00:27:28.921 [2024-07-15 23:53:17.685024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.921 [2024-07-15 23:53:17.685034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.921 qpair failed and we were unable to recover it.
00:27:28.925 [... the same three-line error (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 2024-07-15 23:53:17.685215 through 23:53:17.702316; only the timestamps differ ...]
00:27:28.925 [2024-07-15 23:53:17.702568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.702578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.702763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.702773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.702903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.702913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.703098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.703108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.703235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.703244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 
00:27:28.925 [2024-07-15 23:53:17.703359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.703369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.703562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.703572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.703754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.703765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.703954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.703964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.704082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.704091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 
00:27:28.925 [2024-07-15 23:53:17.704291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.704301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.704425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.704435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.704628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.704638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.704854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.704864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.705094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.705103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 
00:27:28.925 [2024-07-15 23:53:17.705239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.705249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.705442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.705454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.705541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.705551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.705746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.705756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.925 qpair failed and we were unable to recover it. 00:27:28.925 [2024-07-15 23:53:17.705939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.925 [2024-07-15 23:53:17.705949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 
00:27:28.926 [2024-07-15 23:53:17.706130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.706140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.706322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.706332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.706406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.706416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.706625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.706636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.706833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.706842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 
00:27:28.926 [2024-07-15 23:53:17.706978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.706988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.707117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.707127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.707258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.707268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.707382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.707392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.707527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.707537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 
00:27:28.926 [2024-07-15 23:53:17.707718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.707729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.707922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.707933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.708057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.708066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.708161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.708171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.708308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.708318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 
00:27:28.926 [2024-07-15 23:53:17.708449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.708460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.708656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.708667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.708849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.708858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.708999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.709009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.709127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.709137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 
00:27:28.926 [2024-07-15 23:53:17.709410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.709420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.709666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.709676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.709792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.709802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.709993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.710013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.710150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.710165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 
00:27:28.926 [2024-07-15 23:53:17.710377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.710391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.710531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.710544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.710663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.710677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.710798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.710812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.710952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.710966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 
00:27:28.926 [2024-07-15 23:53:17.711098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.711112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.711299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.711313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.711592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.711605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.711707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.711721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.711851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.711865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 
00:27:28.926 [2024-07-15 23:53:17.712071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.712085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.712230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.712247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.712384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.712399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.712624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.712638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 00:27:28.926 [2024-07-15 23:53:17.712769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.926 [2024-07-15 23:53:17.712783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.926 qpair failed and we were unable to recover it. 
00:27:28.926 [2024-07-15 23:53:17.712874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.712888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.713022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.713036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.713170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.713183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.713343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.713358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.713488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.713502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 
00:27:28.927 [2024-07-15 23:53:17.713621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.713635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.713934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.713948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.714231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.714246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.714357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.714370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.714627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.714641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 
00:27:28.927 [2024-07-15 23:53:17.714788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.714802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.715004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.715017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.715190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.715204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.715418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.715432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.715684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.715698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 
00:27:28.927 [2024-07-15 23:53:17.715881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.715895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.716053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.716067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.716197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.716211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.716429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.716443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.716579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.716593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 
[... three further connect() failures (errno = 111) for tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 at 23:53:17.716799, 23:53:17.717014, and 23:53:17.717256 omitted ...] 
00:27:28.927 [2024-07-15 23:53:17.717395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.717423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.717587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.717605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 
00:27:28.927 [2024-07-15 23:53:17.717727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.717739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.717859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.717869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.718072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.718082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.718193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.718203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.718420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.718430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 
00:27:28.927 [2024-07-15 23:53:17.718547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.718557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.927 qpair failed and we were unable to recover it. 00:27:28.927 [2024-07-15 23:53:17.718756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.927 [2024-07-15 23:53:17.718766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.718904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.718913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.719047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.719057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.719242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.719253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 
00:27:28.928 [2024-07-15 23:53:17.719399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.719409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.719604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.719616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.719750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.719760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.719872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.719882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.720061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.720071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 
00:27:28.928 [2024-07-15 23:53:17.720274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.720284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.720404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.720414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.720597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.720606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.720735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.720745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.720867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.720877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 
00:27:28.928 [2024-07-15 23:53:17.720996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.721006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.721138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.721148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.721327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.721337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.721454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.721465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.721662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.721672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 
00:27:28.928 [2024-07-15 23:53:17.721800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.721809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.722026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.722035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.722164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.722174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.722306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.722316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.722563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.722573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 
00:27:28.928 [2024-07-15 23:53:17.722689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.722698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.722884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.722893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.723007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.723017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.723198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.723208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.723361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.723372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 
00:27:28.928 [2024-07-15 23:53:17.723555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.723565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.723691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.723701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.723816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.723826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.724103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.724113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.724292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.724303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 
00:27:28.928 [2024-07-15 23:53:17.724481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.724490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.724698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.724708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.724959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.724969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.725085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.725094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.725218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.725231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 
00:27:28.928 [2024-07-15 23:53:17.725442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.725452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.928 [2024-07-15 23:53:17.725588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.928 [2024-07-15 23:53:17.725598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.928 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.725728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.725738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.725924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.725933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.726116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.726126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 
00:27:28.929 [2024-07-15 23:53:17.726308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.726318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.726448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.726460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.726662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.726673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.726885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.726895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.727034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.727044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 
00:27:28.929 [2024-07-15 23:53:17.727231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.727241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.727377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.727387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.727509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.727519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.727657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.727667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.727865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.727875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 
00:27:28.929 [2024-07-15 23:53:17.728056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.728066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.728192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.728202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.728345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.728356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.728473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.728483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.728678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.728688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 
00:27:28.929 [2024-07-15 23:53:17.728822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.728832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.728981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.728991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.729181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.729191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.729373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.729383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.729567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.729577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 
00:27:28.929 [2024-07-15 23:53:17.729776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.729786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.729900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.729910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.730116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.730126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.730308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.730318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.730455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.730466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 
00:27:28.929 [2024-07-15 23:53:17.730665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.730675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.730943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.730953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.731151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.731162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.731296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.731312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.731576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.731589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 
00:27:28.929 [2024-07-15 23:53:17.731797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.731811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.731962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.731976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.732117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.732131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.732318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.732334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.732467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.732482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 
00:27:28.929 [2024-07-15 23:53:17.732611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.732625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.732906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.732919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.929 [2024-07-15 23:53:17.733051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.929 [2024-07-15 23:53:17.733065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.929 qpair failed and we were unable to recover it. 00:27:28.930 [2024-07-15 23:53:17.733199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.930 [2024-07-15 23:53:17.733213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.930 qpair failed and we were unable to recover it. 00:27:28.930 [2024-07-15 23:53:17.733411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.930 [2024-07-15 23:53:17.733424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.930 qpair failed and we were unable to recover it. 
00:27:28.930 [... the identical three-line sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats for every retry through 2024-07-15 23:53:17.752200 ...]
00:27:28.932 [2024-07-15 23:53:17.752345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.932 [2024-07-15 23:53:17.752359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.752486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.752500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.752689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.752704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.752909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.752922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.753071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.753085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 
00:27:28.933 [2024-07-15 23:53:17.753221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.753238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.753372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.753386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.753515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.753529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.753651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.753665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.753891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.753904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 
00:27:28.933 [2024-07-15 23:53:17.754035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.754049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.754262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.754276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.754417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.754431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.754566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.754577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.754705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.754715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 
00:27:28.933 [2024-07-15 23:53:17.754969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.754979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.755159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.755169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.755320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.755331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.755525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.755534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.755724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.755734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 
00:27:28.933 [2024-07-15 23:53:17.755849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.755859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.755972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.755982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.756121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.756131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.756205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.756214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.756414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.756424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 
00:27:28.933 [2024-07-15 23:53:17.756542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.756553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.756685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.756695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.756842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.756852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.756962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.756972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.757096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.757106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 
00:27:28.933 [2024-07-15 23:53:17.757302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.757312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.757426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.757436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.757569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.757580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.757716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.757726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.757849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.757859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 
00:27:28.933 [2024-07-15 23:53:17.758086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.758096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.758207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.758217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.758413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.758424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.758545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.758556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 00:27:28.933 [2024-07-15 23:53:17.758750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.933 [2024-07-15 23:53:17.758760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.933 qpair failed and we were unable to recover it. 
00:27:28.933 [2024-07-15 23:53:17.758877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.758887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.759011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.759020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.759140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.759150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.759277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.759287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.759411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.759421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 
00:27:28.934 [2024-07-15 23:53:17.759540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.759550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.759766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.759776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.759962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.759972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.760087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.760097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.760292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.760302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 
00:27:28.934 [2024-07-15 23:53:17.760411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.760421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.760546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.760556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.760691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.760701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.760822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.760832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.760966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.760975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 
00:27:28.934 [2024-07-15 23:53:17.761107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.761117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.761264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.761274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.761395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.761405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.761525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.761536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.761651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.761661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 
00:27:28.934 [2024-07-15 23:53:17.761796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.761807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.761922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.761932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.762049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.762059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.762189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.762198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.762332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.762343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 
00:27:28.934 [2024-07-15 23:53:17.762471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.762481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.762608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.762617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.762727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.762736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.762852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.762862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.762983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.762993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 
00:27:28.934 [2024-07-15 23:53:17.763106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.763116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.763297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.763307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.763425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.763435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.763559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.763569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.763689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.763698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 
00:27:28.934 [2024-07-15 23:53:17.763817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.763827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.763943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.763953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.764205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.764215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.764337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.764351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-15 23:53:17.764471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.934 [2024-07-15 23:53:17.764480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.934 qpair failed and we were unable to recover it. 
00:27:28.934 [... identical connect() errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7fbe48000b90 (10.0.0.2:4420) through 2024-07-15 23:53:17.782270 ...]
00:27:28.937 [2024-07-15 23:53:17.782390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.937 [2024-07-15 23:53:17.782400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.937 qpair failed and we were unable to recover it. 00:27:28.937 [2024-07-15 23:53:17.782591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.937 [2024-07-15 23:53:17.782601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.937 qpair failed and we were unable to recover it. 00:27:28.937 [2024-07-15 23:53:17.782732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.937 [2024-07-15 23:53:17.782742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.937 qpair failed and we were unable to recover it. 00:27:28.937 [2024-07-15 23:53:17.782995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.937 [2024-07-15 23:53:17.783004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.937 qpair failed and we were unable to recover it. 00:27:28.937 [2024-07-15 23:53:17.783191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.937 [2024-07-15 23:53:17.783200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.937 qpair failed and we were unable to recover it. 
00:27:28.937 [2024-07-15 23:53:17.783330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.937 [2024-07-15 23:53:17.783340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.937 qpair failed and we were unable to recover it.
00:27:28.937 [2024-07-15 23:53:17.783519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.783530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.783669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.783679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.783927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.783939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.784124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.784134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.784233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.784244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.784438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.784448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.784573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.784583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.784776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.784786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.784982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.784992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.785235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.785246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.785335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.785345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.785469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.785478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.785599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.785609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.785791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.785800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.785921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.785931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.786051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.786061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.786181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.786192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.786379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.786389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.786520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.786530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.786658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.786667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.786847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.786858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.786978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.786989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.787102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.787111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.787248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.787258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.787396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.787406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.787539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.787549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.787676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.787686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.787863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.787874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.787982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.787992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.788078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.788099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.788242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.788258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.788465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.788480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.788604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.788618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.788763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.788776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.788978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.788993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.789200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.789214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.789344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.789358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.789612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.789626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.789758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.789772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.789911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.789925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.938 [2024-07-15 23:53:17.790073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.938 [2024-07-15 23:53:17.790087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.938 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.790221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.790240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.790363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.790377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.790514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.790528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.790780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.790794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.790915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.790929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.791205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.791219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.791439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.791453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.791639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.791655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.791791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.791804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.791948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.791962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.792067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.792080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.792423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.792440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.792641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.792654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.792801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.792815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.792944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.792958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.793160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.793172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.793317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.793327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.793528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.793538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.793658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.793668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.793787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.793797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.793890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.793900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.794029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.794039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.794156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.794165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.794287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.794297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.794546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.794556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.794685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.794695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.794874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.794884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.795002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.795012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.795149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.795159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.795301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.795312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.795443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.795453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.795649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.795658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.795910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.795920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.796051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.796061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.796242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.796252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.796384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.796395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.796511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.796521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.796709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.939 [2024-07-15 23:53:17.796720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.939 qpair failed and we were unable to recover it.
00:27:28.939 [2024-07-15 23:53:17.796834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.939 [2024-07-15 23:53:17.796844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.939 qpair failed and we were unable to recover it. 00:27:28.939 [2024-07-15 23:53:17.797045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.939 [2024-07-15 23:53:17.797055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.939 qpair failed and we were unable to recover it. 00:27:28.939 [2024-07-15 23:53:17.797235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.939 [2024-07-15 23:53:17.797245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.939 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.797366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.797376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.797511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.797521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 
00:27:28.940 [2024-07-15 23:53:17.797634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.797643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.797771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.797781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.797981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.797991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.798191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.798200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.798489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.798500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 
00:27:28.940 [2024-07-15 23:53:17.798639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.798649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.798829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.798839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.799018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.799028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.799151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.799161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.799425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.799435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 
00:27:28.940 [2024-07-15 23:53:17.799628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.799639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.799823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.799833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.799976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.799988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.800111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.800121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.800313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.800324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 
00:27:28.940 [2024-07-15 23:53:17.800504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.800513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.800626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.800636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.800888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.800898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.801077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.801087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.801290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.801301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 
00:27:28.940 [2024-07-15 23:53:17.801488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.801498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.801692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.801702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.801816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.801826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.801970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.801979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.802110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.802120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 
00:27:28.940 [2024-07-15 23:53:17.802264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.802274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.802408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.802418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.802548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.802559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.802698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.802708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.802827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.802836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 
00:27:28.940 [2024-07-15 23:53:17.802958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.802968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.803087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.803097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.940 qpair failed and we were unable to recover it. 00:27:28.940 [2024-07-15 23:53:17.803285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.940 [2024-07-15 23:53:17.803295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.803420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.803429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.803547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.803558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 
00:27:28.941 [2024-07-15 23:53:17.803671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.803681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.803797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.803808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.804008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.804018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.804124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.804134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.804332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.804342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 
00:27:28.941 [2024-07-15 23:53:17.804416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.804426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.804549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.804559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.804676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.804686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.804804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.804814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.805022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.805032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 
00:27:28.941 [2024-07-15 23:53:17.805218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.805232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.805359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.805370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.805505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.805515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.805649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.805659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.805783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.805794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 
00:27:28.941 [2024-07-15 23:53:17.805935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.805945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.806202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.806211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.806346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.806357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.806607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.806617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.806749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.806758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 
00:27:28.941 [2024-07-15 23:53:17.806894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.806903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.807026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.807037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.807130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.807140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.807272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.807282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.807481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.807491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 
00:27:28.941 [2024-07-15 23:53:17.807617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.807627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.807749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.807759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.807875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.807885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.808071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.808081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.808198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.808208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 
00:27:28.941 [2024-07-15 23:53:17.808407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.808417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.808610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.808621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.808740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.808751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.808976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.808986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.809091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.809101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 
00:27:28.941 [2024-07-15 23:53:17.809287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.809297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.809430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.809440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.809662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.941 [2024-07-15 23:53:17.809672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.941 qpair failed and we were unable to recover it. 00:27:28.941 [2024-07-15 23:53:17.809849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.809859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.809981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.809990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 
00:27:28.942 [2024-07-15 23:53:17.810121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.810132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.810264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.810274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.810404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.810414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.810597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.810606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.810743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.810754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 
00:27:28.942 [2024-07-15 23:53:17.810873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.810883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.811067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.811077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.811276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.811287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.811467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.811477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.811599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.811608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 
00:27:28.942 [2024-07-15 23:53:17.811791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.811801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.811987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.811997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.812213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.812223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.812415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.812426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 00:27:28.942 [2024-07-15 23:53:17.812550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.942 [2024-07-15 23:53:17.812561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:28.942 qpair failed and we were unable to recover it. 
00:27:28.942 [2024-07-15 23:53:17.812690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.812700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.812832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.812842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.813044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.813056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.813183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.813193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.813373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.813382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.813526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.813536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.813649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.813659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.813914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.813923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.814051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.814062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.814310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.814321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.814456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.814467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.814621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.814631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.814828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.814838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.815029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.815039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.815162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.815172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.815305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.815315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.815438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.815448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.815659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.815668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.815797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.815808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.815937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.815947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.816126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.816136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.816257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.816268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.816511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.816521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.816617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.942 [2024-07-15 23:53:17.816627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.942 qpair failed and we were unable to recover it.
00:27:28.942 [2024-07-15 23:53:17.816759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.816770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.816966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.816977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.817179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.817189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.817365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.817375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.817580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.817590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.817719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.817728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.817851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.817862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 ))
00:27:28.943 [2024-07-15 23:53:17.817979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.817990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.818201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.818211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # return 0
00:27:28.943 [2024-07-15 23:53:17.818362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.818373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.818494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.818504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:28.943 [2024-07-15 23:53:17.818624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.818634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.818748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.818758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:27:28.943 [2024-07-15 23:53:17.818894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.818905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.819015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.819025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:28.943 [2024-07-15 23:53:17.819203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.819213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.819339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.819352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.819477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.819487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.819681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.819692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.819879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.819890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.820099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.820110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.820294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.820305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.820509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.820519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.820756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.820766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.820952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.820962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.821165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.821176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.821308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.821318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.821457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.821468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.821600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.821610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.821741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.821751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.821890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.821901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.822089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.822099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.822222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.822235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.822459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.822470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.822677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.822687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.822806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.822817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.822952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.822964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.823093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.823103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.943 [2024-07-15 23:53:17.823237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.943 [2024-07-15 23:53:17.823248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.943 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.823483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.823493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.823632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.823643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.823831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.823841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.824039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.824049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.824257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.824267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.824389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.824399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.824515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.824524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.824605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.824614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.824801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.824812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.824951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.824961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.825094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.825104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.825240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.825251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.825368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.825380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.825502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.825513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.825645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.825655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.825856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.825866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.825989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.826000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.826120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.826132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.826315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.826325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.826512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.826522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.826637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.826648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.826791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.826802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.826981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.826991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.827183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.827194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.827317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.827328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.827524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.827534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.827659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.827670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.827789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.827799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.828004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.828014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.828137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.828147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.828332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.828343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.828491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.828503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.828698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.828709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.944 [2024-07-15 23:53:17.828842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.944 [2024-07-15 23:53:17.828853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.944 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.828981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.828992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.829116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.829126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.829258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.829268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.829459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.829470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.829600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.829610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.829747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.829757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.829872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.829883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.830006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.830018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.830149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.830159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.830340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.830353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.830491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.830502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.830618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.830632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.830769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.830780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.830900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.830911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.831042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.831053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.831172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.831183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.831308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.831319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.831499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.831509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.831632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.831642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.831761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.831771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.831904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.831914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.832030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.832040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.832157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.832166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.832295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.832307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.832441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.832451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.832631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.832641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.832720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.832731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.945 [2024-07-15 23:53:17.832913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.945 [2024-07-15 23:53:17.832923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.945 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.833124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.833134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.833270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.833281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.833408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.833419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.833530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.833540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.833651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.833661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.833872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.833882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.834062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.834073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.834221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.834234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.834345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.834355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.834475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.834486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.834719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.834729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.834857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.834867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.834987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.834997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.835140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.835150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.835293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.835304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.835417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.835428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.835607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.835617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.835809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.835819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.835941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.835951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.836062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.836073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.836222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.836237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.836359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.836370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.836486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.836497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.836613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.836623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.836745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.836755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.836943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.836954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.837070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.837080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.837197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.837206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.837352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.837363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.837480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.837490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.837607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.837617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.837808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.837819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.837898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.837909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.838032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.838043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.838169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.838179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.838359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.838373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.838504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.838514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.838630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.838640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.946 qpair failed and we were unable to recover it.
00:27:28.946 [2024-07-15 23:53:17.838779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.946 [2024-07-15 23:53:17.838789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.838916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.838926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.839047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.839058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.839189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.839199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.839338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.839349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.839468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.839479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.839618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.839629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.839808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.839819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.839939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.839949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.840148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.840158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.840293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.840304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.840441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.840451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.840566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.840576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.840696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.840706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.840831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.840842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.840962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.840972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.841106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.841116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.841267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.841278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.841405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.841416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.841538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.841549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.841665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.841675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.841862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.841873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.841993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.842003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.842252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.842264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.842396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.842418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.842565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.947 [2024-07-15 23:53:17.842579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420
00:27:28.947 qpair failed and we were unable to recover it.
00:27:28.947 [2024-07-15 23:53:17.842716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.842730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.842862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.842877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.843055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.843069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.843202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.843217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.843416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.843430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 
00:27:28.947 [2024-07-15 23:53:17.843561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.843576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.843720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.843735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.843925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.843939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.844128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.844143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.844241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.844255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 
00:27:28.947 [2024-07-15 23:53:17.844475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.844490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.844614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.844631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.844761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.844775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.947 [2024-07-15 23:53:17.844905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.947 [2024-07-15 23:53:17.844919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.947 qpair failed and we were unable to recover it. 00:27:28.948 [2024-07-15 23:53:17.845047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.948 [2024-07-15 23:53:17.845061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.948 qpair failed and we were unable to recover it. 
00:27:28.948 [2024-07-15 23:53:17.845207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.948 [2024-07-15 23:53:17.845222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.948 qpair failed and we were unable to recover it. 00:27:28.948 [2024-07-15 23:53:17.845362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.948 [2024-07-15 23:53:17.845376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:28.948 qpair failed and we were unable to recover it. 00:27:29.213 [2024-07-15 23:53:17.845503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.213 [2024-07-15 23:53:17.845517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.213 qpair failed and we were unable to recover it. 00:27:29.213 [2024-07-15 23:53:17.845719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.213 [2024-07-15 23:53:17.845735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.213 qpair failed and we were unable to recover it. 00:27:29.213 [2024-07-15 23:53:17.845869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.213 [2024-07-15 23:53:17.845885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.213 qpair failed and we were unable to recover it. 
00:27:29.213 [2024-07-15 23:53:17.846005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.213 [2024-07-15 23:53:17.846020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.213 qpair failed and we were unable to recover it. 00:27:29.213 [2024-07-15 23:53:17.846142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.213 [2024-07-15 23:53:17.846158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.213 qpair failed and we were unable to recover it. 00:27:29.213 [2024-07-15 23:53:17.846369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.213 [2024-07-15 23:53:17.846385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.213 qpair failed and we were unable to recover it. 00:27:29.213 [2024-07-15 23:53:17.846515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.213 [2024-07-15 23:53:17.846530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.213 qpair failed and we were unable to recover it. 00:27:29.213 [2024-07-15 23:53:17.846660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.213 [2024-07-15 23:53:17.846675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.213 qpair failed and we were unable to recover it. 
00:27:29.214 [2024-07-15 23:53:17.846828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.846844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.846979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.846994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.847119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.847134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.847298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.847315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.847444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.847460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 
00:27:29.214 [2024-07-15 23:53:17.847586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.847602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.847867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.847883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.848026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.848041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.848167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.848182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.848306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.848322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 
00:27:29.214 [2024-07-15 23:53:17.848458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.848474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.848607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.848622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.848823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.848838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.848989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.849021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.849169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.849185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 
00:27:29.214 [2024-07-15 23:53:17.849315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.849332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.849467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.849482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.849672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.849687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.849814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.849829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.849964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.849979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 
00:27:29.214 [2024-07-15 23:53:17.850111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.850130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.850344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.850368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.850518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.850535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.850686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.850702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.850835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.850850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 
00:27:29.214 [2024-07-15 23:53:17.851062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.851078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.851203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.851218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.851382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.851397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.851546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.851562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.851699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.851716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 
00:27:29.214 [2024-07-15 23:53:17.851840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.851856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.852059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.852074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.852267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.852283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.214 qpair failed and we were unable to recover it. 00:27:29.214 [2024-07-15 23:53:17.852417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.214 [2024-07-15 23:53:17.852433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.215 qpair failed and we were unable to recover it. 00:27:29.215 [2024-07-15 23:53:17.852563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.215 [2024-07-15 23:53:17.852578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.215 qpair failed and we were unable to recover it. 
00:27:29.215 [2024-07-15 23:53:17.852719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.215 [2024-07-15 23:53:17.852735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.215 qpair failed and we were unable to recover it. 00:27:29.215 [2024-07-15 23:53:17.852871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.215 [2024-07-15 23:53:17.852887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.215 qpair failed and we were unable to recover it. 00:27:29.215 [2024-07-15 23:53:17.853021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.215 [2024-07-15 23:53:17.853037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.215 qpair failed and we were unable to recover it. 00:27:29.215 [2024-07-15 23:53:17.853134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.215 [2024-07-15 23:53:17.853149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.215 qpair failed and we were unable to recover it. 00:27:29.215 [2024-07-15 23:53:17.853286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.215 [2024-07-15 23:53:17.853303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.215 qpair failed and we were unable to recover it. 
00:27:29.215 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:29.215 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:29.215 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable
00:27:29.215 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.216 [2024-07-15 23:53:17.859604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.859622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.859813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.859827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.859951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.859966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.860094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.860109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.860302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.860319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 
00:27:29.216 [2024-07-15 23:53:17.860453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.860468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.860601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.860617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.860738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.860753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.860894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.860909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.861095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.861110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 
00:27:29.216 [2024-07-15 23:53:17.861242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.861258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.861380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.861396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.861520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.861535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.861736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.216 [2024-07-15 23:53:17.861755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.216 qpair failed and we were unable to recover it. 00:27:29.216 [2024-07-15 23:53:17.861948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.861964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 
00:27:29.217 [2024-07-15 23:53:17.862088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.862103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.862237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.862254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.862379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.862394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.862586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.862603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.862736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.862752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 
00:27:29.217 [2024-07-15 23:53:17.862878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.862893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.863021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.863036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.863161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.863178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.863316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.863332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.863518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.863534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 
00:27:29.217 [2024-07-15 23:53:17.863734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.863751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.863878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.863893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.864041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.864058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.864201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.864216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.864417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.864433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 
00:27:29.217 [2024-07-15 23:53:17.864626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.864642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.864780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.864795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.864932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.864949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.865075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.865091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.865285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.865302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 
00:27:29.217 [2024-07-15 23:53:17.865492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.865508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.865701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.865717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.865878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.865895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.866030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.866047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.866183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.866199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe40000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 
00:27:29.217 [2024-07-15 23:53:17.866409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.866435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.866528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.866544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.866731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.866747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.866882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.866898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 00:27:29.217 [2024-07-15 23:53:17.867025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.217 [2024-07-15 23:53:17.867041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.217 qpair failed and we were unable to recover it. 
00:27:29.217 [2024-07-15 23:53:17.867169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.867186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.867374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.867391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.867524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.867540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.867680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.867695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.867824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.867839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 
00:27:29.218 [2024-07-15 23:53:17.868058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.868073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.868215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.868237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.868388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.868405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.868533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.868554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.868749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.868764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 
00:27:29.218 [2024-07-15 23:53:17.868965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.868981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.869114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.869130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.869319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.869336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.869540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.869556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.869692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.869709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 
00:27:29.218 [2024-07-15 23:53:17.869905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.869920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.870112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.870129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.870318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.870335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.870527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.870543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.870685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.870702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 
00:27:29.218 [2024-07-15 23:53:17.870828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.870843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.871048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.871063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.871188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.871204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.871415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.871432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.871557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.871572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 
00:27:29.218 [2024-07-15 23:53:17.871719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.871736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.871873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.871889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.872021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.872037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.872163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.872179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 00:27:29.218 [2024-07-15 23:53:17.872378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.218 [2024-07-15 23:53:17.872395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.218 qpair failed and we were unable to recover it. 
00:27:29.219 [2024-07-15 23:53:17.872549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.219 [2024-07-15 23:53:17.872565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.219 qpair failed and we were unable to recover it. 00:27:29.219 [2024-07-15 23:53:17.872759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.219 [2024-07-15 23:53:17.872776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.219 qpair failed and we were unable to recover it. 00:27:29.219 [2024-07-15 23:53:17.872900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.219 [2024-07-15 23:53:17.872915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.219 qpair failed and we were unable to recover it. 00:27:29.219 [2024-07-15 23:53:17.873042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.219 [2024-07-15 23:53:17.873058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.219 qpair failed and we were unable to recover it. 00:27:29.219 [2024-07-15 23:53:17.873264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.219 [2024-07-15 23:53:17.873280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.219 qpair failed and we were unable to recover it. 
00:27:29.219 [2024-07-15 23:53:17.873542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.219 [2024-07-15 23:53:17.873564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.219 qpair failed and we were unable to recover it. 00:27:29.219 [2024-07-15 23:53:17.873695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.219 [2024-07-15 23:53:17.873707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.219 qpair failed and we were unable to recover it. 00:27:29.219 [2024-07-15 23:53:17.873833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.219 [2024-07-15 23:53:17.873845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.219 qpair failed and we were unable to recover it. 00:27:29.219 [2024-07-15 23:53:17.874025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.219 [2024-07-15 23:53:17.874036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.219 qpair failed and we were unable to recover it. 00:27:29.219 [2024-07-15 23:53:17.874284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.219 [2024-07-15 23:53:17.874296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.219 qpair failed and we were unable to recover it. 
00:27:29.219 [2024-07-15 23:53:17.874415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.874426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.219 qpair failed and we were unable to recover it.
00:27:29.219 [2024-07-15 23:53:17.874542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.874553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.219 qpair failed and we were unable to recover it.
00:27:29.219 [2024-07-15 23:53:17.874730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.874741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.219 qpair failed and we were unable to recover it.
00:27:29.219 [2024-07-15 23:53:17.874933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.874945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.219 qpair failed and we were unable to recover it.
00:27:29.219 [2024-07-15 23:53:17.875141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.875152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.219 qpair failed and we were unable to recover it.
00:27:29.219 [2024-07-15 23:53:17.875292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.875304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.219 qpair failed and we were unable to recover it.
00:27:29.219 [2024-07-15 23:53:17.875438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.875449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.219 qpair failed and we were unable to recover it.
00:27:29.219 Malloc0
00:27:29.219 [2024-07-15 23:53:17.875665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.875678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.219 qpair failed and we were unable to recover it.
00:27:29.219 [2024-07-15 23:53:17.875801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.875815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.219 qpair failed and we were unable to recover it.
00:27:29.219 [2024-07-15 23:53:17.876008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.876020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.219 qpair failed and we were unable to recover it.
00:27:29.219 [2024-07-15 23:53:17.876135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.219 [2024-07-15 23:53:17.876146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.876277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.876290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:27:29.220 [2024-07-15 23:53:17.876522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.876534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.876668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.876680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:29.220 [2024-07-15 23:53:17.876958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.876970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable
00:27:29.220 [2024-07-15 23:53:17.877171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.877184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.877303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.877317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.220 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.877473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.877495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.877704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.877719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.877872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.877887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.878087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.878105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.878265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.878283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.878415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.878431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.878655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.878670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.878800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.878815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.879020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.879035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.879146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.879161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.879429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.879446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.879645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.879660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.879867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.879881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.880015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.880030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.880160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.880175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.880380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.880396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.880613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.880628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.880771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.880787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.880913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.880928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.881054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.881069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.881274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.881291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.881477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.881492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.220 qpair failed and we were unable to recover it.
00:27:29.220 [2024-07-15 23:53:17.881645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.220 [2024-07-15 23:53:17.881659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.881852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.881866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.882075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.882090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.882291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.882307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.882440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.882455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.882577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.882591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.882736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.882751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.882896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.882912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.883030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.883048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.883194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.883194] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:29.221 [2024-07-15 23:53:17.883210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.883355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.883368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.883510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.883525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.883728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.883742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.883944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.883959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.884147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.884161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.884298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.884313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.884455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.884470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.884665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.884680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.884810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.884825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.885016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.885031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.885154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.885169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.885308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.885322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.885478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.885492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.885619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.885634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.885784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.885798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.885992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.886007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.886136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.886151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.886292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.886307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.886499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.886514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.886727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.886742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.221 [2024-07-15 23:53:17.886929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.221 [2024-07-15 23:53:17.886944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.221 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.887086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.887101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.887206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.887221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.887427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.887442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.887654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.887668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.887809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.887824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.887940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.887951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.888076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.888088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.888217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.888235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.888347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.888359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.888542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.888553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]]
00:27:29.222 [2024-07-15 23:53:17.888683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.888695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.888898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.888909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.889022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
[2024-07-15 23:53:17.889034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.889214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.889231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable
00:27:29.222 [2024-07-15 23:53:17.889360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.889372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.889581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.889594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:29.222 [2024-07-15 23:53:17.889811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.889828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.889954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.889970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.890173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.890189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.890330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.890345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.890534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:29.222 [2024-07-15 23:53:17.890549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420
00:27:29.222 qpair failed and we were unable to recover it.
00:27:29.222 [2024-07-15 23:53:17.890685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.222 [2024-07-15 23:53:17.890700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.222 qpair failed and we were unable to recover it. 00:27:29.222 [2024-07-15 23:53:17.890890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.222 [2024-07-15 23:53:17.890905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.222 qpair failed and we were unable to recover it. 00:27:29.222 [2024-07-15 23:53:17.891101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.222 [2024-07-15 23:53:17.891116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.222 qpair failed and we were unable to recover it. 00:27:29.222 [2024-07-15 23:53:17.891253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.222 [2024-07-15 23:53:17.891268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.222 qpair failed and we were unable to recover it. 00:27:29.222 [2024-07-15 23:53:17.891415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.222 [2024-07-15 23:53:17.891429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.222 qpair failed and we were unable to recover it. 
00:27:29.222 [2024-07-15 23:53:17.891565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.222 [2024-07-15 23:53:17.891581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.222 qpair failed and we were unable to recover it. 00:27:29.222 [2024-07-15 23:53:17.891787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.222 [2024-07-15 23:53:17.891802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.222 qpair failed and we were unable to recover it. 00:27:29.222 [2024-07-15 23:53:17.891936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.222 [2024-07-15 23:53:17.891951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.222 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.892082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.892100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.892289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.892304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 
00:27:29.223 [2024-07-15 23:53:17.892438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.892453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.892581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.892596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.892742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.892757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.892909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.892924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.893129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.893144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 
00:27:29.223 [2024-07-15 23:53:17.893348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.893363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.893498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.893513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.893704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.893719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.893981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.893997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.894120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.894135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 
00:27:29.223 [2024-07-15 23:53:17.894261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.894277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.894424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.894440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.894727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.894742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.894877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.894892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.895029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.895044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 
00:27:29.223 [2024-07-15 23:53:17.895180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.895194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.895455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.895471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.895693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.895708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.895899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.895914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.896122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.896137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 
00:27:29.223 [2024-07-15 23:53:17.896340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.896355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.896491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.896506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.896641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.896656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.896799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.896813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.897010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.897025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 
00:27:29.223 [2024-07-15 23:53:17.897216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.897243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.897449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.223 [2024-07-15 23:53:17.897464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.223 qpair failed and we were unable to recover it. 00:27:29.223 [2024-07-15 23:53:17.897740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.897756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.897879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.897895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.898068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.898084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 
00:27:29.224 [2024-07-15 23:53:17.898237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.898253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.898532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.898547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.898676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.898691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.898894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.898909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.899050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.899065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 
00:27:29.224 [2024-07-15 23:53:17.899184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.899200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.899391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.899408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.899556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.899571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.899736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.899752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.899936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.899951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 
00:27:29.224 [2024-07-15 23:53:17.900158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.900174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.900372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.900387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.900517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.900532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.900616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.900631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:29.224 [2024-07-15 23:53:17.900852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.900868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 
00:27:29.224 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:29.224 [2024-07-15 23:53:17.901068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.901083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.901205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.901220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:29.224 [2024-07-15 23:53:17.901366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.901382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.901523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.901538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 
00:27:29.224 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.224 [2024-07-15 23:53:17.901736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.901752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.901881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.901896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.902101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.902116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.902308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.902324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.902511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.902527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 
00:27:29.224 [2024-07-15 23:53:17.902718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.902734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.902890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.902904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.224 [2024-07-15 23:53:17.903006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.224 [2024-07-15 23:53:17.903021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.224 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.903147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.903163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.903287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.903302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 
00:27:29.225 [2024-07-15 23:53:17.903557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.903572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.903784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.903799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.904002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.904017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.904212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.904243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.904398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.904413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 
00:27:29.225 [2024-07-15 23:53:17.904536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.904554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.904680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.904696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.904842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.904857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.905049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.905063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.905191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.905207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 
00:27:29.225 [2024-07-15 23:53:17.905333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.905348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.905563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.905577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.905713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.905728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.905860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.905875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.906009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.906024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 
00:27:29.225 [2024-07-15 23:53:17.906150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.906164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.906401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.906416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.906542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.906556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.906778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.906793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.906985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.907001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 
00:27:29.225 [2024-07-15 23:53:17.907196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.907211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d1ed0 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.907425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.225 [2024-07-15 23:53:17.907446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe50000b90 with addr=10.0.0.2, port=4420 00:27:29.225 qpair failed and we were unable to recover it. 00:27:29.225 [2024-07-15 23:53:17.907576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.907589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.907774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.907785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.907963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.907975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 
00:27:29.226 [2024-07-15 23:53:17.908110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.908121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.908258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.908269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.908413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.908425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.908675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.908687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:29.226 [2024-07-15 23:53:17.908827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.908840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 
00:27:29.226 [2024-07-15 23:53:17.908969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.908980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.226 [2024-07-15 23:53:17.909117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.909130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.909256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.909269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:29.226 [2024-07-15 23:53:17.909469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.909482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 
00:27:29.226 [2024-07-15 23:53:17.909665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.909677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.909873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.909885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.910077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.910088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.910235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.910250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.910400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.910411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 
00:27:29.226 [2024-07-15 23:53:17.910562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.910574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.910755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.910768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.910892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.910903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.911017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.911028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.911149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.911160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 
00:27:29.226 [2024-07-15 23:53:17.911350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.911362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.911552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.911563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.911674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.911685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.911877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.911889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.912001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.912012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 
00:27:29.226 [2024-07-15 23:53:17.912204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:29.226 [2024-07-15 23:53:17.912202] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.226 [2024-07-15 23:53:17.912216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fbe48000b90 with addr=10.0.0.2, port=4420 00:27:29.226 qpair failed and we were unable to recover it. 00:27:29.226 [2024-07-15 23:53:17.913756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.226 [2024-07-15 23:53:17.913878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:17.913897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:17.913905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:17.913913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:17.913934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:29.227 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:29.227 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:27:29.227 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.227 [2024-07-15 23:53:17.923774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.227 [2024-07-15 23:53:17.923900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:17.923919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:17.923927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:17.923937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:17.923954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:27:29.227 23:53:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1164293 00:27:29.227 [2024-07-15 23:53:17.933719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.227 [2024-07-15 23:53:17.933794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:17.933810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:17.933817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:17.933823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:17.933839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 [2024-07-15 23:53:17.943681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.227 [2024-07-15 23:53:17.943766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:17.943781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:17.943789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:17.943795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:17.943810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 [2024-07-15 23:53:17.953796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.227 [2024-07-15 23:53:17.953890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:17.953906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:17.953914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:17.953921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:17.953938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 [2024-07-15 23:53:17.963747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.227 [2024-07-15 23:53:17.963900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:17.963918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:17.963926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:17.963933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:17.963950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 [2024-07-15 23:53:17.973795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.227 [2024-07-15 23:53:17.973868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:17.973884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:17.973892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:17.973898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:17.973913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 [2024-07-15 23:53:17.983754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.227 [2024-07-15 23:53:17.983840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:17.983855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:17.983862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:17.983868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:17.983883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 [2024-07-15 23:53:17.993777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.227 [2024-07-15 23:53:17.993857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:17.993873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:17.993880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:17.993886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:17.993901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 [2024-07-15 23:53:18.003860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.227 [2024-07-15 23:53:18.003945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:18.003960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:18.003968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:18.003974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:18.003989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 [2024-07-15 23:53:18.013922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.227 [2024-07-15 23:53:18.013992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.227 [2024-07-15 23:53:18.014012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.227 [2024-07-15 23:53:18.014020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.227 [2024-07-15 23:53:18.014027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.227 [2024-07-15 23:53:18.014043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.227 qpair failed and we were unable to recover it. 
00:27:29.227 [2024-07-15 23:53:18.023910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.023982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.023997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.024005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.024011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.024026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.033970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.034047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.034063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.034071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.034077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.034092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.043974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.044047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.044063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.044071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.044077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.044092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.053983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.054071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.054087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.054094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.054100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.054119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.064066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.064135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.064149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.064156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.064162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.064177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.073993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.074072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.074087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.074095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.074101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.074116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.084089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.084158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.084173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.084180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.084187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.084202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.094131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.094201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.094216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.094227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.094234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.094249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.104136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.104207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.104230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.104238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.104243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.104259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.114197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.114318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.114334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.114342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.114348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.114364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.124183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.228 [2024-07-15 23:53:18.124257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.228 [2024-07-15 23:53:18.124272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.228 [2024-07-15 23:53:18.124280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.228 [2024-07-15 23:53:18.124286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.228 [2024-07-15 23:53:18.124300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.228 qpair failed and we were unable to recover it. 
00:27:29.228 [2024-07-15 23:53:18.134299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.228 [2024-07-15 23:53:18.134428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.229 [2024-07-15 23:53:18.134443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.229 [2024-07-15 23:53:18.134450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.229 [2024-07-15 23:53:18.134456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.229 [2024-07-15 23:53:18.134472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.229 qpair failed and we were unable to recover it.
00:27:29.229 [2024-07-15 23:53:18.144196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.229 [2024-07-15 23:53:18.144278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.229 [2024-07-15 23:53:18.144293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.229 [2024-07-15 23:53:18.144300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.229 [2024-07-15 23:53:18.144310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.229 [2024-07-15 23:53:18.144325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.229 qpair failed and we were unable to recover it.
00:27:29.229 [2024-07-15 23:53:18.154317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.229 [2024-07-15 23:53:18.154386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.229 [2024-07-15 23:53:18.154402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.229 [2024-07-15 23:53:18.154409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.229 [2024-07-15 23:53:18.154415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.229 [2024-07-15 23:53:18.154431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.229 qpair failed and we were unable to recover it.
00:27:29.229 [2024-07-15 23:53:18.164435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.229 [2024-07-15 23:53:18.164513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.229 [2024-07-15 23:53:18.164528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.229 [2024-07-15 23:53:18.164536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.229 [2024-07-15 23:53:18.164542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.229 [2024-07-15 23:53:18.164557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.229 qpair failed and we were unable to recover it.
00:27:29.229 [2024-07-15 23:53:18.174420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.229 [2024-07-15 23:53:18.174494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.229 [2024-07-15 23:53:18.174510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.229 [2024-07-15 23:53:18.174517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.229 [2024-07-15 23:53:18.174523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.229 [2024-07-15 23:53:18.174538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.229 qpair failed and we were unable to recover it.
00:27:29.489 [2024-07-15 23:53:18.184386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.489 [2024-07-15 23:53:18.184465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.489 [2024-07-15 23:53:18.184480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.489 [2024-07-15 23:53:18.184487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.489 [2024-07-15 23:53:18.184493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.489 [2024-07-15 23:53:18.184508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.489 qpair failed and we were unable to recover it.
00:27:29.489 [2024-07-15 23:53:18.194503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.489 [2024-07-15 23:53:18.194582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.489 [2024-07-15 23:53:18.194597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.489 [2024-07-15 23:53:18.194604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.489 [2024-07-15 23:53:18.194610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.489 [2024-07-15 23:53:18.194625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.489 qpair failed and we were unable to recover it.
00:27:29.489 [2024-07-15 23:53:18.204473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.489 [2024-07-15 23:53:18.204554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.489 [2024-07-15 23:53:18.204569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.489 [2024-07-15 23:53:18.204576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.489 [2024-07-15 23:53:18.204582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.489 [2024-07-15 23:53:18.204597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.489 qpair failed and we were unable to recover it.
00:27:29.489 [2024-07-15 23:53:18.214525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.489 [2024-07-15 23:53:18.214599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.489 [2024-07-15 23:53:18.214614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.489 [2024-07-15 23:53:18.214622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.489 [2024-07-15 23:53:18.214628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.489 [2024-07-15 23:53:18.214643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.489 qpair failed and we were unable to recover it.
00:27:29.489 [2024-07-15 23:53:18.224538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.489 [2024-07-15 23:53:18.224618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.489 [2024-07-15 23:53:18.224633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.489 [2024-07-15 23:53:18.224641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.489 [2024-07-15 23:53:18.224647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.489 [2024-07-15 23:53:18.224662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.489 qpair failed and we were unable to recover it.
00:27:29.489 [2024-07-15 23:53:18.234554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.489 [2024-07-15 23:53:18.234668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.489 [2024-07-15 23:53:18.234684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.489 [2024-07-15 23:53:18.234692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.489 [2024-07-15 23:53:18.234704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.234720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.244615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.244683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.244699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.244706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.244712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.244728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.254648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.254753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.254770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.254778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.254784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.254800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.264634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.264711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.264727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.264734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.264741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.264756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.274654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.274724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.274739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.274746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.274753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.274768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.284665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.284745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.284761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.284768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.284774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.284790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.294712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.294782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.294797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.294804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.294810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.294825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.304722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.304797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.304814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.304821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.304828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.304844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.314766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.314846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.314861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.314868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.314874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.314889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.324794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.324867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.324882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.324893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.324899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.324914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.334815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.334900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.334917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.334925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.334931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.334947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.344846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.344931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.344946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.344954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.344960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.344975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.354932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.355019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.355034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.355041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.355048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.355063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.364924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.365027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.365042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.365050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.365056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.365072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.374975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.490 [2024-07-15 23:53:18.375048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.490 [2024-07-15 23:53:18.375063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.490 [2024-07-15 23:53:18.375070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.490 [2024-07-15 23:53:18.375076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.490 [2024-07-15 23:53:18.375092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.490 qpair failed and we were unable to recover it.
00:27:29.490 [2024-07-15 23:53:18.384916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.491 [2024-07-15 23:53:18.384987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.491 [2024-07-15 23:53:18.385002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.491 [2024-07-15 23:53:18.385010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.491 [2024-07-15 23:53:18.385016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.491 [2024-07-15 23:53:18.385031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.491 qpair failed and we were unable to recover it.
00:27:29.491 [2024-07-15 23:53:18.394968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.491 [2024-07-15 23:53:18.395037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.491 [2024-07-15 23:53:18.395052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.491 [2024-07-15 23:53:18.395059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.491 [2024-07-15 23:53:18.395066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.491 [2024-07-15 23:53:18.395080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.491 qpair failed and we were unable to recover it.
00:27:29.491 [2024-07-15 23:53:18.405059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.491 [2024-07-15 23:53:18.405155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.491 [2024-07-15 23:53:18.405169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.491 [2024-07-15 23:53:18.405177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.491 [2024-07-15 23:53:18.405184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.491 [2024-07-15 23:53:18.405199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.491 qpair failed and we were unable to recover it.
00:27:29.491 [2024-07-15 23:53:18.415109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.491 [2024-07-15 23:53:18.415182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.491 [2024-07-15 23:53:18.415200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.491 [2024-07-15 23:53:18.415208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.491 [2024-07-15 23:53:18.415214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.491 [2024-07-15 23:53:18.415232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.491 qpair failed and we were unable to recover it.
00:27:29.491 [2024-07-15 23:53:18.425077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.491 [2024-07-15 23:53:18.425161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.491 [2024-07-15 23:53:18.425176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.491 [2024-07-15 23:53:18.425183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.491 [2024-07-15 23:53:18.425189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.491 [2024-07-15 23:53:18.425204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.491 qpair failed and we were unable to recover it.
00:27:29.491 [2024-07-15 23:53:18.435101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.491 [2024-07-15 23:53:18.435171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.491 [2024-07-15 23:53:18.435186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.491 [2024-07-15 23:53:18.435193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.491 [2024-07-15 23:53:18.435200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.491 [2024-07-15 23:53:18.435215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.491 qpair failed and we were unable to recover it.
00:27:29.491 [2024-07-15 23:53:18.445138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:29.491 [2024-07-15 23:53:18.445218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:29.491 [2024-07-15 23:53:18.445239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:29.491 [2024-07-15 23:53:18.445248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:29.491 [2024-07-15 23:53:18.445254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90
00:27:29.491 [2024-07-15 23:53:18.445269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:29.491 qpair failed and we were unable to recover it.
00:27:29.491 [2024-07-15 23:53:18.455164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.491 [2024-07-15 23:53:18.455249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.491 [2024-07-15 23:53:18.455264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.491 [2024-07-15 23:53:18.455272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.491 [2024-07-15 23:53:18.455278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.491 [2024-07-15 23:53:18.455296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.491 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.465187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.465329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.465346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.465353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.465360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.752 [2024-07-15 23:53:18.465377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.475250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.475330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.475346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.475354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.475360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.752 [2024-07-15 23:53:18.475375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.485291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.485398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.485414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.485421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.485427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.752 [2024-07-15 23:53:18.485444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.495284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.495347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.495362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.495369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.495375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.752 [2024-07-15 23:53:18.495390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.505328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.505405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.505423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.505431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.505437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.752 [2024-07-15 23:53:18.505451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.515344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.515418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.515434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.515442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.515448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.752 [2024-07-15 23:53:18.515463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.525287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.525352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.525367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.525374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.525381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.752 [2024-07-15 23:53:18.525396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.535398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.535508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.535525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.535532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.535538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.752 [2024-07-15 23:53:18.535553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.545426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.545496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.545511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.545518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.545527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.752 [2024-07-15 23:53:18.545542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.555403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.555479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.555495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.555502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.555509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.752 [2024-07-15 23:53:18.555524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.752 qpair failed and we were unable to recover it. 
00:27:29.752 [2024-07-15 23:53:18.565453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.752 [2024-07-15 23:53:18.565521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.752 [2024-07-15 23:53:18.565536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.752 [2024-07-15 23:53:18.565544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.752 [2024-07-15 23:53:18.565550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.565565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.575493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.575561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.575576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.575584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.575590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.575605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.585508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.585579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.585594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.585602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.585608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.585623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.595589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.595672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.595687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.595694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.595700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.595715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.605582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.605652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.605668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.605676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.605682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.605698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.615641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.615708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.615724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.615731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.615737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.615752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.625633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.625704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.625719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.625726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.625732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.625747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.635697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.635768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.635783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.635790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.635799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.635814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.645716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.645829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.645845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.645852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.645859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.645874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.655728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.655801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.655817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.655825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.655831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.655846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.665747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.665816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.665831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.665838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.665844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.665859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.675819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.675887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.675903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.675910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.675916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe48000b90 00:27:29.753 [2024-07-15 23:53:18.675931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.685794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.685883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.685911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.685923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.685934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:29.753 [2024-07-15 23:53:18.685958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.695852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.695931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.695949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.695957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.695964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:29.753 [2024-07-15 23:53:18.695980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.705889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.753 [2024-07-15 23:53:18.705961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.753 [2024-07-15 23:53:18.705979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.753 [2024-07-15 23:53:18.705986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.753 [2024-07-15 23:53:18.705992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:29.753 [2024-07-15 23:53:18.706008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.753 qpair failed and we were unable to recover it. 
00:27:29.753 [2024-07-15 23:53:18.715810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.754 [2024-07-15 23:53:18.715886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.754 [2024-07-15 23:53:18.715903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.754 [2024-07-15 23:53:18.715910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.754 [2024-07-15 23:53:18.715916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:29.754 [2024-07-15 23:53:18.715931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.754 qpair failed and we were unable to recover it. 
00:27:30.013 [2024-07-15 23:53:18.725937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.013 [2024-07-15 23:53:18.726007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.013 [2024-07-15 23:53:18.726027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.013 [2024-07-15 23:53:18.726038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.013 [2024-07-15 23:53:18.726045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.013 [2024-07-15 23:53:18.726061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.013 qpair failed and we were unable to recover it. 
00:27:30.013 [2024-07-15 23:53:18.735951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.013 [2024-07-15 23:53:18.736030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.013 [2024-07-15 23:53:18.736049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.013 [2024-07-15 23:53:18.736057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.013 [2024-07-15 23:53:18.736063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.013 [2024-07-15 23:53:18.736079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.013 qpair failed and we were unable to recover it. 
00:27:30.013 [2024-07-15 23:53:18.746031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.013 [2024-07-15 23:53:18.746102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.013 [2024-07-15 23:53:18.746119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.013 [2024-07-15 23:53:18.746128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.013 [2024-07-15 23:53:18.746134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.013 [2024-07-15 23:53:18.746150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.013 qpair failed and we were unable to recover it. 
00:27:30.013 [2024-07-15 23:53:18.755994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.013 [2024-07-15 23:53:18.756070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.013 [2024-07-15 23:53:18.756087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.013 [2024-07-15 23:53:18.756095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.013 [2024-07-15 23:53:18.756101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.013 [2024-07-15 23:53:18.756117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.013 qpair failed and we were unable to recover it. 
00:27:30.013 [2024-07-15 23:53:18.766034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.013 [2024-07-15 23:53:18.766108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.013 [2024-07-15 23:53:18.766125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.013 [2024-07-15 23:53:18.766132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.013 [2024-07-15 23:53:18.766139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.013 [2024-07-15 23:53:18.766154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.013 qpair failed and we were unable to recover it. 
00:27:30.013 [2024-07-15 23:53:18.776056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.013 [2024-07-15 23:53:18.776132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.013 [2024-07-15 23:53:18.776149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.013 [2024-07-15 23:53:18.776156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.013 [2024-07-15 23:53:18.776163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.013 [2024-07-15 23:53:18.776178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.013 qpair failed and we were unable to recover it. 
00:27:30.013 [2024-07-15 23:53:18.786026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.013 [2024-07-15 23:53:18.786101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.013 [2024-07-15 23:53:18.786117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.013 [2024-07-15 23:53:18.786125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.013 [2024-07-15 23:53:18.786131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.013 [2024-07-15 23:53:18.786146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.013 qpair failed and we were unable to recover it. 
00:27:30.013 [2024-07-15 23:53:18.796125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.013 [2024-07-15 23:53:18.796207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.013 [2024-07-15 23:53:18.796228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.013 [2024-07-15 23:53:18.796236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.013 [2024-07-15 23:53:18.796242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.013 [2024-07-15 23:53:18.796257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.013 qpair failed and we were unable to recover it. 
00:27:30.013 [2024-07-15 23:53:18.806154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.806243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.806260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.806267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.806274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.806288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.816185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.816301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.816319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.816330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.816336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.816351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.826219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.826302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.826319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.826328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.826334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.826351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.836263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.836381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.836399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.836406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.836413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.836428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.846269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.846338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.846354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.846361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.846368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.846382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.856285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.856417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.856435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.856442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.856448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.856464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.866394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.866467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.866484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.866491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.866497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.866512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.876351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.876435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.876450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.876457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.876463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.876478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.886401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.886480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.886498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.886505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.886512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.886526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.896418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.896496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.896511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.896518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.896525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.896539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.906462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.906533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.906552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.906560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.906566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.906581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.916472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.916539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.916555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.916563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.916569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.916585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.926496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.926568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.926586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.926594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.926601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.926617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.936525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.936605] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.936621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.014 [2024-07-15 23:53:18.936628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.014 [2024-07-15 23:53:18.936634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.014 [2024-07-15 23:53:18.936648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.014 qpair failed and we were unable to recover it. 
00:27:30.014 [2024-07-15 23:53:18.946579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.014 [2024-07-15 23:53:18.946651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.014 [2024-07-15 23:53:18.946666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.015 [2024-07-15 23:53:18.946674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.015 [2024-07-15 23:53:18.946680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.015 [2024-07-15 23:53:18.946694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.015 qpair failed and we were unable to recover it. 
00:27:30.015 [2024-07-15 23:53:18.956608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.015 [2024-07-15 23:53:18.956683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.015 [2024-07-15 23:53:18.956699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.015 [2024-07-15 23:53:18.956707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.015 [2024-07-15 23:53:18.956713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.015 [2024-07-15 23:53:18.956728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.015 qpair failed and we were unable to recover it. 
00:27:30.015 [2024-07-15 23:53:18.966673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.015 [2024-07-15 23:53:18.966746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.015 [2024-07-15 23:53:18.966763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.015 [2024-07-15 23:53:18.966771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.015 [2024-07-15 23:53:18.966777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.015 [2024-07-15 23:53:18.966792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.015 qpair failed and we were unable to recover it. 
00:27:30.015 [2024-07-15 23:53:18.976643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.015 [2024-07-15 23:53:18.976714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.015 [2024-07-15 23:53:18.976730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.015 [2024-07-15 23:53:18.976737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.015 [2024-07-15 23:53:18.976743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.015 [2024-07-15 23:53:18.976757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.015 qpair failed and we were unable to recover it. 
00:27:30.274 [2024-07-15 23:53:18.986703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.274 [2024-07-15 23:53:18.986773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.274 [2024-07-15 23:53:18.986792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.274 [2024-07-15 23:53:18.986800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.274 [2024-07-15 23:53:18.986807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.274 [2024-07-15 23:53:18.986823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.274 qpair failed and we were unable to recover it. 
00:27:30.274 [2024-07-15 23:53:18.996723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.274 [2024-07-15 23:53:18.996796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.274 [2024-07-15 23:53:18.996817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.274 [2024-07-15 23:53:18.996825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.274 [2024-07-15 23:53:18.996831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.274 [2024-07-15 23:53:18.996848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.274 qpair failed and we were unable to recover it. 
00:27:30.274 [2024-07-15 23:53:19.006738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.274 [2024-07-15 23:53:19.006809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.274 [2024-07-15 23:53:19.006827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.274 [2024-07-15 23:53:19.006835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.274 [2024-07-15 23:53:19.006841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.274 [2024-07-15 23:53:19.006857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.274 qpair failed and we were unable to recover it. 
00:27:30.274 [2024-07-15 23:53:19.016769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.274 [2024-07-15 23:53:19.016842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.274 [2024-07-15 23:53:19.016861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.016870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.016877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.016894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.026797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.026873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.026890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.026898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.026904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.026919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.036846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.036916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.036932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.036940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.036946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.036966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.046845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.046965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.046983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.046990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.046997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.047013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.056862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.056930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.056946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.056954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.056960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.056975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.066892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.066961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.066977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.066984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.066990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.067004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.076941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.077019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.077036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.077043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.077050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.077065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.087004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.087073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.087093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.087100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.087107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.087122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.096999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.097093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.097109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.097117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.097124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.097141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.107039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.107112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.107128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.107136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.107143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.107157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.117056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.117127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.117144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.117151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.117158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.117172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.127088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.127159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.127175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.127182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.127188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.127209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.137092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.137159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.137175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.137182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.137188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.137203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.147145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.147218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.147238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.147246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.147252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.275 [2024-07-15 23:53:19.147266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.275 qpair failed and we were unable to recover it. 
00:27:30.275 [2024-07-15 23:53:19.157165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.275 [2024-07-15 23:53:19.157244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.275 [2024-07-15 23:53:19.157259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.275 [2024-07-15 23:53:19.157267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.275 [2024-07-15 23:53:19.157273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.276 [2024-07-15 23:53:19.157288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-15 23:53:19.167216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.276 [2024-07-15 23:53:19.167298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.276 [2024-07-15 23:53:19.167314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.276 [2024-07-15 23:53:19.167322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.276 [2024-07-15 23:53:19.167328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.276 [2024-07-15 23:53:19.167343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-15 23:53:19.177219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.276 [2024-07-15 23:53:19.177303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.276 [2024-07-15 23:53:19.177321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.276 [2024-07-15 23:53:19.177329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.276 [2024-07-15 23:53:19.177335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.276 [2024-07-15 23:53:19.177350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-15 23:53:19.187219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.276 [2024-07-15 23:53:19.187297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.276 [2024-07-15 23:53:19.187313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.276 [2024-07-15 23:53:19.187322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.276 [2024-07-15 23:53:19.187329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.276 [2024-07-15 23:53:19.187345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-15 23:53:19.197279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.276 [2024-07-15 23:53:19.197360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.276 [2024-07-15 23:53:19.197377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.276 [2024-07-15 23:53:19.197384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.276 [2024-07-15 23:53:19.197390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.276 [2024-07-15 23:53:19.197405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-15 23:53:19.207310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.276 [2024-07-15 23:53:19.207384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.276 [2024-07-15 23:53:19.207401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.276 [2024-07-15 23:53:19.207409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.276 [2024-07-15 23:53:19.207415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.276 [2024-07-15 23:53:19.207430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-15 23:53:19.217346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.276 [2024-07-15 23:53:19.217411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.276 [2024-07-15 23:53:19.217427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.276 [2024-07-15 23:53:19.217434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.276 [2024-07-15 23:53:19.217444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.276 [2024-07-15 23:53:19.217459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-15 23:53:19.227367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.276 [2024-07-15 23:53:19.227442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.276 [2024-07-15 23:53:19.227458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.276 [2024-07-15 23:53:19.227465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.276 [2024-07-15 23:53:19.227471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.276 [2024-07-15 23:53:19.227486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.276 [2024-07-15 23:53:19.237360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.276 [2024-07-15 23:53:19.237443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.276 [2024-07-15 23:53:19.237459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.276 [2024-07-15 23:53:19.237467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.276 [2024-07-15 23:53:19.237473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.276 [2024-07-15 23:53:19.237488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.276 qpair failed and we were unable to recover it. 
00:27:30.536 [2024-07-15 23:53:19.247414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.536 [2024-07-15 23:53:19.247489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.536 [2024-07-15 23:53:19.247511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.536 [2024-07-15 23:53:19.247518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.536 [2024-07-15 23:53:19.247525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.536 [2024-07-15 23:53:19.247542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.536 qpair failed and we were unable to recover it. 
00:27:30.536 [2024-07-15 23:53:19.257376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.536 [2024-07-15 23:53:19.257443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.536 [2024-07-15 23:53:19.257464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.536 [2024-07-15 23:53:19.257473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.536 [2024-07-15 23:53:19.257479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.536 [2024-07-15 23:53:19.257496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.536 qpair failed and we were unable to recover it. 
00:27:30.536 [2024-07-15 23:53:19.267474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.536 [2024-07-15 23:53:19.267557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.536 [2024-07-15 23:53:19.267573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.536 [2024-07-15 23:53:19.267580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.536 [2024-07-15 23:53:19.267586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.536 [2024-07-15 23:53:19.267601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.536 qpair failed and we were unable to recover it. 
00:27:30.536 [2024-07-15 23:53:19.277434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.536 [2024-07-15 23:53:19.277509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.536 [2024-07-15 23:53:19.277525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.277533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.277539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.277554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.287566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.287631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.287646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.287654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.287660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.287675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.297510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.297584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.297600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.297607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.297613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.297628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.307613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.307744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.307763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.307770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.307781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.307798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.317554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.317629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.317645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.317652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.317658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.317673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.327628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.327699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.327715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.327722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.327728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.327744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.337672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.337743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.337759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.337767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.337772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.337787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.347806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.347882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.347897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.347904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.347911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.347926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.357734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.357802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.357818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.357825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.357832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.357846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.367733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.367802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.367818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.367825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.367832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.367846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.377782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.377856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.377872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.377879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.377885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.377900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.387772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.387846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.387862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.387869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.387875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.387890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.397888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.397967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.397983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.397992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.398001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.398016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.407869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.407939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.407957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.407964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.407971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.537 [2024-07-15 23:53:19.407985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.537 qpair failed and we were unable to recover it. 
00:27:30.537 [2024-07-15 23:53:19.417922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.537 [2024-07-15 23:53:19.417998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.537 [2024-07-15 23:53:19.418014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.537 [2024-07-15 23:53:19.418021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.537 [2024-07-15 23:53:19.418027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.538 [2024-07-15 23:53:19.418042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.538 qpair failed and we were unable to recover it. 
00:27:30.538 [2024-07-15 23:53:19.427988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.538 [2024-07-15 23:53:19.428060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.538 [2024-07-15 23:53:19.428077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.538 [2024-07-15 23:53:19.428084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.538 [2024-07-15 23:53:19.428090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.538 [2024-07-15 23:53:19.428104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.538 qpair failed and we were unable to recover it. 
00:27:30.538 [2024-07-15 23:53:19.437991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.538 [2024-07-15 23:53:19.438063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.538 [2024-07-15 23:53:19.438079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.538 [2024-07-15 23:53:19.438086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.538 [2024-07-15 23:53:19.438092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.538 [2024-07-15 23:53:19.438107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.538 qpair failed and we were unable to recover it. 
00:27:30.538 [2024-07-15 23:53:19.448035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.538 [2024-07-15 23:53:19.448105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.538 [2024-07-15 23:53:19.448121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.538 [2024-07-15 23:53:19.448128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.538 [2024-07-15 23:53:19.448135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.538 [2024-07-15 23:53:19.448149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.538 qpair failed and we were unable to recover it. 
00:27:30.538 [2024-07-15 23:53:19.458040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.538 [2024-07-15 23:53:19.458113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.538 [2024-07-15 23:53:19.458130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.538 [2024-07-15 23:53:19.458137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.538 [2024-07-15 23:53:19.458143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.538 [2024-07-15 23:53:19.458159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.538 qpair failed and we were unable to recover it. 
00:27:30.538 [2024-07-15 23:53:19.468062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.538 [2024-07-15 23:53:19.468133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.538 [2024-07-15 23:53:19.468149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.538 [2024-07-15 23:53:19.468157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.538 [2024-07-15 23:53:19.468163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.538 [2024-07-15 23:53:19.468178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.538 qpair failed and we were unable to recover it. 
00:27:30.538 [2024-07-15 23:53:19.478087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.538 [2024-07-15 23:53:19.478208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.538 [2024-07-15 23:53:19.478229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.538 [2024-07-15 23:53:19.478237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.538 [2024-07-15 23:53:19.478244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.538 [2024-07-15 23:53:19.478259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.538 qpair failed and we were unable to recover it. 
00:27:30.538 [2024-07-15 23:53:19.488145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.538 [2024-07-15 23:53:19.488220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.538 [2024-07-15 23:53:19.488240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.538 [2024-07-15 23:53:19.488250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.538 [2024-07-15 23:53:19.488256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.538 [2024-07-15 23:53:19.488270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.538 qpair failed and we were unable to recover it. 
00:27:30.538 [2024-07-15 23:53:19.498140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.538 [2024-07-15 23:53:19.498209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.538 [2024-07-15 23:53:19.498227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.538 [2024-07-15 23:53:19.498235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.538 [2024-07-15 23:53:19.498241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.538 [2024-07-15 23:53:19.498256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.538 qpair failed and we were unable to recover it. 
00:27:30.538 [2024-07-15 23:53:19.508191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.538 [2024-07-15 23:53:19.508271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.538 [2024-07-15 23:53:19.508293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.538 [2024-07-15 23:53:19.508301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.538 [2024-07-15 23:53:19.508308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.538 [2024-07-15 23:53:19.508326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.538 qpair failed and we were unable to recover it. 
00:27:30.798 [2024-07-15 23:53:19.518216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.798 [2024-07-15 23:53:19.518325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.798 [2024-07-15 23:53:19.518345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.798 [2024-07-15 23:53:19.518353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.798 [2024-07-15 23:53:19.518360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.798 [2024-07-15 23:53:19.518377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.798 qpair failed and we were unable to recover it. 
00:27:30.798 [2024-07-15 23:53:19.528257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.798 [2024-07-15 23:53:19.528375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.798 [2024-07-15 23:53:19.528394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.798 [2024-07-15 23:53:19.528402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.798 [2024-07-15 23:53:19.528409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.798 [2024-07-15 23:53:19.528425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.798 qpair failed and we were unable to recover it. 
00:27:30.798 [2024-07-15 23:53:19.538279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.798 [2024-07-15 23:53:19.538349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.798 [2024-07-15 23:53:19.538365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.798 [2024-07-15 23:53:19.538372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.798 [2024-07-15 23:53:19.538379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.798 [2024-07-15 23:53:19.538394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.798 qpair failed and we were unable to recover it. 
00:27:30.798 [2024-07-15 23:53:19.548309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.798 [2024-07-15 23:53:19.548379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.798 [2024-07-15 23:53:19.548396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.798 [2024-07-15 23:53:19.548404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.798 [2024-07-15 23:53:19.548410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.798 [2024-07-15 23:53:19.548425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.798 qpair failed and we were unable to recover it. 
00:27:30.798 [2024-07-15 23:53:19.558340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.798 [2024-07-15 23:53:19.558411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.798 [2024-07-15 23:53:19.558428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.798 [2024-07-15 23:53:19.558435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.798 [2024-07-15 23:53:19.558442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.798 [2024-07-15 23:53:19.558457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.798 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.568407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.568512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.568527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.568535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.568542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.568557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.578370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.578460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.578477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.578487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.578494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.578509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.588408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.588484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.588500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.588507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.588514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.588529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.598439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.598510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.598526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.598534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.598541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.598555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.608476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.608545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.608562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.608569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.608576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.608590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.618505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.618585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.618601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.618608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.618615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.618629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.628538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.628608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.628625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.628633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.628640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.628655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.638573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.638647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.638665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.638673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.638680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.638695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.648591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.648665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.648681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.648689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.648695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.648709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.658621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.658691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.658709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.658716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.658723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.658737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.668676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.668747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.668763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.668773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.668780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.668794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.678611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.678681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.678696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.678704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.678710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.678724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.688720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.688791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.688806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.688813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.688820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.688834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.698740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.799 [2024-07-15 23:53:19.698812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.799 [2024-07-15 23:53:19.698828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.799 [2024-07-15 23:53:19.698835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.799 [2024-07-15 23:53:19.698841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.799 [2024-07-15 23:53:19.698855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.799 qpair failed and we were unable to recover it. 
00:27:30.799 [2024-07-15 23:53:19.708793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.800 [2024-07-15 23:53:19.708863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.800 [2024-07-15 23:53:19.708878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.800 [2024-07-15 23:53:19.708886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.800 [2024-07-15 23:53:19.708893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.800 [2024-07-15 23:53:19.708907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.800 qpair failed and we were unable to recover it. 
00:27:30.800 [2024-07-15 23:53:19.718840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.800 [2024-07-15 23:53:19.718944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.800 [2024-07-15 23:53:19.718960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.800 [2024-07-15 23:53:19.718969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.800 [2024-07-15 23:53:19.718976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.800 [2024-07-15 23:53:19.718991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.800 qpair failed and we were unable to recover it. 
00:27:30.800 [2024-07-15 23:53:19.728831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.800 [2024-07-15 23:53:19.728900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.800 [2024-07-15 23:53:19.728916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.800 [2024-07-15 23:53:19.728923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.800 [2024-07-15 23:53:19.728930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:30.800 [2024-07-15 23:53:19.728944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.800 qpair failed and we were unable to recover it. 
00:27:30.800 [2024-07-15 23:53:19.738788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.800 [2024-07-15 23:53:19.738855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.800 [2024-07-15 23:53:19.738871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.800 [2024-07-15 23:53:19.738879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.800 [2024-07-15 23:53:19.738885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:30.800 [2024-07-15 23:53:19.738899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:30.800 qpair failed and we were unable to recover it.
00:27:30.800 [2024-07-15 23:53:19.748893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.800 [2024-07-15 23:53:19.748965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.800 [2024-07-15 23:53:19.748980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.800 [2024-07-15 23:53:19.748987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.800 [2024-07-15 23:53:19.748994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:30.800 [2024-07-15 23:53:19.749009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:30.800 qpair failed and we were unable to recover it.
00:27:30.800 [2024-07-15 23:53:19.758904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.800 [2024-07-15 23:53:19.758971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.800 [2024-07-15 23:53:19.758992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.800 [2024-07-15 23:53:19.758999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.800 [2024-07-15 23:53:19.759005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:30.800 [2024-07-15 23:53:19.759020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:30.800 qpair failed and we were unable to recover it.
00:27:30.800 [2024-07-15 23:53:19.768951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:30.800 [2024-07-15 23:53:19.769020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:30.800 [2024-07-15 23:53:19.769039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:30.800 [2024-07-15 23:53:19.769047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:30.800 [2024-07-15 23:53:19.769054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:30.800 [2024-07-15 23:53:19.769070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:30.800 qpair failed and we were unable to recover it.
00:27:31.061 [2024-07-15 23:53:19.779032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.061 [2024-07-15 23:53:19.779104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.061 [2024-07-15 23:53:19.779123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.061 [2024-07-15 23:53:19.779131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.061 [2024-07-15 23:53:19.779138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.061 [2024-07-15 23:53:19.779154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.061 qpair failed and we were unable to recover it.
00:27:31.061 [2024-07-15 23:53:19.789032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.061 [2024-07-15 23:53:19.789106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.061 [2024-07-15 23:53:19.789123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.061 [2024-07-15 23:53:19.789131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.061 [2024-07-15 23:53:19.789137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.061 [2024-07-15 23:53:19.789153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.061 qpair failed and we were unable to recover it.
00:27:31.061 [2024-07-15 23:53:19.799020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.061 [2024-07-15 23:53:19.799091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.061 [2024-07-15 23:53:19.799107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.061 [2024-07-15 23:53:19.799115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.061 [2024-07-15 23:53:19.799121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.061 [2024-07-15 23:53:19.799139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.061 qpair failed and we were unable to recover it.
00:27:31.061 [2024-07-15 23:53:19.809054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.061 [2024-07-15 23:53:19.809122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.061 [2024-07-15 23:53:19.809139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.061 [2024-07-15 23:53:19.809147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.061 [2024-07-15 23:53:19.809153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.061 [2024-07-15 23:53:19.809169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.061 qpair failed and we were unable to recover it.
00:27:31.061 [2024-07-15 23:53:19.819095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.061 [2024-07-15 23:53:19.819160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.061 [2024-07-15 23:53:19.819177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.061 [2024-07-15 23:53:19.819185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.061 [2024-07-15 23:53:19.819191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.061 [2024-07-15 23:53:19.819205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.061 qpair failed and we were unable to recover it.
00:27:31.061 [2024-07-15 23:53:19.829046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.061 [2024-07-15 23:53:19.829120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.061 [2024-07-15 23:53:19.829136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.061 [2024-07-15 23:53:19.829144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.061 [2024-07-15 23:53:19.829152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.061 [2024-07-15 23:53:19.829167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.061 qpair failed and we were unable to recover it.
00:27:31.061 [2024-07-15 23:53:19.839114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.061 [2024-07-15 23:53:19.839185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.061 [2024-07-15 23:53:19.839201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.061 [2024-07-15 23:53:19.839208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.061 [2024-07-15 23:53:19.839214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.061 [2024-07-15 23:53:19.839233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.061 qpair failed and we were unable to recover it.
00:27:31.061 [2024-07-15 23:53:19.849176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.061 [2024-07-15 23:53:19.849252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.061 [2024-07-15 23:53:19.849273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.061 [2024-07-15 23:53:19.849281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.061 [2024-07-15 23:53:19.849287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.061 [2024-07-15 23:53:19.849302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.061 qpair failed and we were unable to recover it.
00:27:31.061 [2024-07-15 23:53:19.859189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.061 [2024-07-15 23:53:19.859264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.859281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.859289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.859295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.859310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.869154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.869231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.869247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.869254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.869261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.869276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.879249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.879320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.879336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.879344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.879349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.879364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.889263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.889352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.889370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.889377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.889383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.889404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.899273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.899344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.899361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.899370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.899380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.899400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.909276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.909369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.909386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.909394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.909402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.909417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.919359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.919430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.919446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.919454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.919460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.919475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.929395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.929464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.929480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.929487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.929493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.929508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.939406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.939475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.939494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.939502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.939508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.939522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.949454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.949525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.949541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.949549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.949555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.949569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.959468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.959542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.959558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.959566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.959572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.959586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.969498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.969570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.969585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.969593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.969599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.969614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.979449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.979531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.979546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.979554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.979561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.979622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.989565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.989631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.989646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.989654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.062 [2024-07-15 23:53:19.989660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.062 [2024-07-15 23:53:19.989674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.062 qpair failed and we were unable to recover it.
00:27:31.062 [2024-07-15 23:53:19.999620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.062 [2024-07-15 23:53:19.999693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.062 [2024-07-15 23:53:19.999709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.062 [2024-07-15 23:53:19.999716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.063 [2024-07-15 23:53:19.999722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.063 [2024-07-15 23:53:19.999736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.063 qpair failed and we were unable to recover it.
00:27:31.063 [2024-07-15 23:53:20.009619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.063 [2024-07-15 23:53:20.009701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.063 [2024-07-15 23:53:20.009717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.063 [2024-07-15 23:53:20.009725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.063 [2024-07-15 23:53:20.009731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.063 [2024-07-15 23:53:20.009746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.063 qpair failed and we were unable to recover it.
00:27:31.063 [2024-07-15 23:53:20.019647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.063 [2024-07-15 23:53:20.019727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.063 [2024-07-15 23:53:20.019747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.063 [2024-07-15 23:53:20.019755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.063 [2024-07-15 23:53:20.019764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.063 [2024-07-15 23:53:20.019782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.063 qpair failed and we were unable to recover it.
00:27:31.063 [2024-07-15 23:53:20.029720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.063 [2024-07-15 23:53:20.029802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.063 [2024-07-15 23:53:20.029826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.063 [2024-07-15 23:53:20.029835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.063 [2024-07-15 23:53:20.029842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.063 [2024-07-15 23:53:20.029859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.063 qpair failed and we were unable to recover it.
00:27:31.323 [2024-07-15 23:53:20.039735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.323 [2024-07-15 23:53:20.039810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.323 [2024-07-15 23:53:20.039828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.323 [2024-07-15 23:53:20.039836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.323 [2024-07-15 23:53:20.039843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.323 [2024-07-15 23:53:20.039858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.323 qpair failed and we were unable to recover it.
00:27:31.323 [2024-07-15 23:53:20.049757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:31.323 [2024-07-15 23:53:20.049831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:31.323 [2024-07-15 23:53:20.049848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:31.323 [2024-07-15 23:53:20.049856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:31.323 [2024-07-15 23:53:20.049863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:31.323 [2024-07-15 23:53:20.049878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:31.323 qpair failed and we were unable to recover it.
00:27:31.323 [2024-07-15 23:53:20.059687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.323 [2024-07-15 23:53:20.059755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.323 [2024-07-15 23:53:20.059774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.323 [2024-07-15 23:53:20.059782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.323 [2024-07-15 23:53:20.059790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.323 [2024-07-15 23:53:20.059806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.323 qpair failed and we were unable to recover it. 
00:27:31.323 [2024-07-15 23:53:20.069826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.323 [2024-07-15 23:53:20.069930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.323 [2024-07-15 23:53:20.069945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.323 [2024-07-15 23:53:20.069953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.323 [2024-07-15 23:53:20.069964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.323 [2024-07-15 23:53:20.069980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.323 qpair failed and we were unable to recover it. 
00:27:31.323 [2024-07-15 23:53:20.079777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.323 [2024-07-15 23:53:20.079850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.323 [2024-07-15 23:53:20.079867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.323 [2024-07-15 23:53:20.079875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.323 [2024-07-15 23:53:20.079881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.323 [2024-07-15 23:53:20.079896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.323 qpair failed and we were unable to recover it. 
00:27:31.323 [2024-07-15 23:53:20.089896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.089964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.089980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.089987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.089993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.090008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.099928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.100034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.100052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.100059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.100065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.100080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.109923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.109996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.110012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.110019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.110026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.110040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.119899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.119972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.119988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.119996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.120002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.120017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.129981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.130053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.130069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.130076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.130082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.130097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.140009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.140080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.140097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.140105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.140112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.140126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.150050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.150122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.150138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.150146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.150152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.150166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.160056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.160139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.160155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.160163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.160172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.160186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.170163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.170241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.170258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.170265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.170271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.170286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.180159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.180227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.180244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.180251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.180257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.180272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.190198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.190275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.190291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.190298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.190305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.190319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.200213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.200283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.200299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.200307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.200313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.200329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.210180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.210263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.210279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.210286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.210292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.210307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.220152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.220221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.220242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.220249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.324 [2024-07-15 23:53:20.220255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.324 [2024-07-15 23:53:20.220270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.324 qpair failed and we were unable to recover it. 
00:27:31.324 [2024-07-15 23:53:20.230270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.324 [2024-07-15 23:53:20.230343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.324 [2024-07-15 23:53:20.230359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.324 [2024-07-15 23:53:20.230366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.325 [2024-07-15 23:53:20.230372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.325 [2024-07-15 23:53:20.230387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.325 qpair failed and we were unable to recover it. 
00:27:31.325 [2024-07-15 23:53:20.240283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.325 [2024-07-15 23:53:20.240415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.325 [2024-07-15 23:53:20.240433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.325 [2024-07-15 23:53:20.240440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.325 [2024-07-15 23:53:20.240447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.325 [2024-07-15 23:53:20.240463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.325 qpair failed and we were unable to recover it. 
00:27:31.325 [2024-07-15 23:53:20.250320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.325 [2024-07-15 23:53:20.250392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.325 [2024-07-15 23:53:20.250407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.325 [2024-07-15 23:53:20.250419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.325 [2024-07-15 23:53:20.250425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.325 [2024-07-15 23:53:20.250440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.325 qpair failed and we were unable to recover it. 
00:27:31.325 [2024-07-15 23:53:20.260343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.325 [2024-07-15 23:53:20.260434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.325 [2024-07-15 23:53:20.260451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.325 [2024-07-15 23:53:20.260458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.325 [2024-07-15 23:53:20.260464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.325 [2024-07-15 23:53:20.260479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.325 qpair failed and we were unable to recover it. 
00:27:31.325 [2024-07-15 23:53:20.270396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.325 [2024-07-15 23:53:20.270483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.325 [2024-07-15 23:53:20.270498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.325 [2024-07-15 23:53:20.270506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.325 [2024-07-15 23:53:20.270512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.325 [2024-07-15 23:53:20.270526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.325 qpair failed and we were unable to recover it. 
00:27:31.325 [2024-07-15 23:53:20.280412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.325 [2024-07-15 23:53:20.280484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.325 [2024-07-15 23:53:20.280499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.325 [2024-07-15 23:53:20.280507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.325 [2024-07-15 23:53:20.280514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.325 [2024-07-15 23:53:20.280527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.325 qpair failed and we were unable to recover it. 
00:27:31.325 [2024-07-15 23:53:20.290476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.325 [2024-07-15 23:53:20.290580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.325 [2024-07-15 23:53:20.290595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.325 [2024-07-15 23:53:20.290603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.325 [2024-07-15 23:53:20.290609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.325 [2024-07-15 23:53:20.290624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.325 qpair failed and we were unable to recover it. 
00:27:31.584 [2024-07-15 23:53:20.300476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.584 [2024-07-15 23:53:20.300547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.584 [2024-07-15 23:53:20.300563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.584 [2024-07-15 23:53:20.300570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.584 [2024-07-15 23:53:20.300576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.584 [2024-07-15 23:53:20.300591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.584 qpair failed and we were unable to recover it. 
00:27:31.584 [2024-07-15 23:53:20.310445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.584 [2024-07-15 23:53:20.310514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.584 [2024-07-15 23:53:20.310530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.584 [2024-07-15 23:53:20.310537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.584 [2024-07-15 23:53:20.310543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.584 [2024-07-15 23:53:20.310558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.584 qpair failed and we were unable to recover it. 
00:27:31.584 [2024-07-15 23:53:20.320546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.584 [2024-07-15 23:53:20.320616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.584 [2024-07-15 23:53:20.320632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.584 [2024-07-15 23:53:20.320639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.584 [2024-07-15 23:53:20.320645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.584 [2024-07-15 23:53:20.320660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.584 qpair failed and we were unable to recover it. 
00:27:31.584 [2024-07-15 23:53:20.330490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.584 [2024-07-15 23:53:20.330558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.584 [2024-07-15 23:53:20.330574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.584 [2024-07-15 23:53:20.330581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.584 [2024-07-15 23:53:20.330587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.584 [2024-07-15 23:53:20.330602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.584 qpair failed and we were unable to recover it. 
00:27:31.584 [2024-07-15 23:53:20.340593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.584 [2024-07-15 23:53:20.340659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.584 [2024-07-15 23:53:20.340675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.584 [2024-07-15 23:53:20.340685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.585 [2024-07-15 23:53:20.340692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.585 [2024-07-15 23:53:20.340706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.585 qpair failed and we were unable to recover it. 
00:27:31.585 [2024-07-15 23:53:20.350637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.585 [2024-07-15 23:53:20.350711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.585 [2024-07-15 23:53:20.350727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.585 [2024-07-15 23:53:20.350735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.585 [2024-07-15 23:53:20.350741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.585 [2024-07-15 23:53:20.350756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.585 qpair failed and we were unable to recover it. 
00:27:31.585 [2024-07-15 23:53:20.360684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.585 [2024-07-15 23:53:20.360784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.585 [2024-07-15 23:53:20.360800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.585 [2024-07-15 23:53:20.360808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.585 [2024-07-15 23:53:20.360816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.585 [2024-07-15 23:53:20.360831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.585 qpair failed and we were unable to recover it. 
00:27:31.847 [2024-07-15 23:53:20.721668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.847 [2024-07-15 23:53:20.721741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.847 [2024-07-15 23:53:20.721758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.847 [2024-07-15 23:53:20.721765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.847 [2024-07-15 23:53:20.721771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.847 [2024-07-15 23:53:20.721785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.847 qpair failed and we were unable to recover it. 
00:27:31.847 [2024-07-15 23:53:20.731784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.847 [2024-07-15 23:53:20.731852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.847 [2024-07-15 23:53:20.731867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.847 [2024-07-15 23:53:20.731874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.847 [2024-07-15 23:53:20.731880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.847 [2024-07-15 23:53:20.731895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.847 qpair failed and we were unable to recover it. 
00:27:31.847 [2024-07-15 23:53:20.741774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.847 [2024-07-15 23:53:20.741845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.847 [2024-07-15 23:53:20.741861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.847 [2024-07-15 23:53:20.741867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.847 [2024-07-15 23:53:20.741874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.847 [2024-07-15 23:53:20.741892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.847 qpair failed and we were unable to recover it. 
00:27:31.847 [2024-07-15 23:53:20.751818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.847 [2024-07-15 23:53:20.751890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.847 [2024-07-15 23:53:20.751907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.847 [2024-07-15 23:53:20.751915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.847 [2024-07-15 23:53:20.751921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.847 [2024-07-15 23:53:20.751936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.847 qpair failed and we were unable to recover it. 
00:27:31.847 [2024-07-15 23:53:20.761755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.847 [2024-07-15 23:53:20.761829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.847 [2024-07-15 23:53:20.761846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.847 [2024-07-15 23:53:20.761853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.847 [2024-07-15 23:53:20.761859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.847 [2024-07-15 23:53:20.761874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.847 qpair failed and we were unable to recover it. 
00:27:31.847 [2024-07-15 23:53:20.771931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.847 [2024-07-15 23:53:20.772011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.847 [2024-07-15 23:53:20.772027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.847 [2024-07-15 23:53:20.772034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.847 [2024-07-15 23:53:20.772040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.847 [2024-07-15 23:53:20.772055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.847 qpair failed and we were unable to recover it. 
00:27:31.847 [2024-07-15 23:53:20.781921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.847 [2024-07-15 23:53:20.781989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.847 [2024-07-15 23:53:20.782005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.847 [2024-07-15 23:53:20.782012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.847 [2024-07-15 23:53:20.782018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.847 [2024-07-15 23:53:20.782033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.847 qpair failed and we were unable to recover it. 
00:27:31.847 [2024-07-15 23:53:20.791894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.847 [2024-07-15 23:53:20.791968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.847 [2024-07-15 23:53:20.791987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.847 [2024-07-15 23:53:20.791994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.847 [2024-07-15 23:53:20.792000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.847 [2024-07-15 23:53:20.792015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.847 qpair failed and we were unable to recover it. 
00:27:31.847 [2024-07-15 23:53:20.801983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.847 [2024-07-15 23:53:20.802057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.847 [2024-07-15 23:53:20.802074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.847 [2024-07-15 23:53:20.802082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.847 [2024-07-15 23:53:20.802089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.847 [2024-07-15 23:53:20.802103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.847 qpair failed and we were unable to recover it. 
00:27:31.847 [2024-07-15 23:53:20.811970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.847 [2024-07-15 23:53:20.812053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.847 [2024-07-15 23:53:20.812069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.847 [2024-07-15 23:53:20.812077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.847 [2024-07-15 23:53:20.812083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:31.847 [2024-07-15 23:53:20.812098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.847 qpair failed and we were unable to recover it. 
00:27:32.107 [2024-07-15 23:53:20.822003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.107 [2024-07-15 23:53:20.822075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.107 [2024-07-15 23:53:20.822092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.107 [2024-07-15 23:53:20.822099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.107 [2024-07-15 23:53:20.822105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.107 [2024-07-15 23:53:20.822120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.107 qpair failed and we were unable to recover it. 
00:27:32.107 [2024-07-15 23:53:20.832042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.107 [2024-07-15 23:53:20.832113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.107 [2024-07-15 23:53:20.832129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.107 [2024-07-15 23:53:20.832137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.107 [2024-07-15 23:53:20.832147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.107 [2024-07-15 23:53:20.832161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.107 qpair failed and we were unable to recover it. 
00:27:32.107 [2024-07-15 23:53:20.841984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.107 [2024-07-15 23:53:20.842056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.107 [2024-07-15 23:53:20.842072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.107 [2024-07-15 23:53:20.842079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.107 [2024-07-15 23:53:20.842085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.107 [2024-07-15 23:53:20.842100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.107 qpair failed and we were unable to recover it. 
00:27:32.107 [2024-07-15 23:53:20.852081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.107 [2024-07-15 23:53:20.852154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.107 [2024-07-15 23:53:20.852170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.107 [2024-07-15 23:53:20.852178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.107 [2024-07-15 23:53:20.852184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.107 [2024-07-15 23:53:20.852199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.107 qpair failed and we were unable to recover it. 
00:27:32.107 [2024-07-15 23:53:20.862062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.107 [2024-07-15 23:53:20.862129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.107 [2024-07-15 23:53:20.862145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.107 [2024-07-15 23:53:20.862152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.107 [2024-07-15 23:53:20.862159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.107 [2024-07-15 23:53:20.862173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.107 qpair failed and we were unable to recover it. 
00:27:32.107 [2024-07-15 23:53:20.872129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.107 [2024-07-15 23:53:20.872200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.107 [2024-07-15 23:53:20.872216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.107 [2024-07-15 23:53:20.872223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.107 [2024-07-15 23:53:20.872234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.107 [2024-07-15 23:53:20.872249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.107 qpair failed and we were unable to recover it. 
00:27:32.107 [2024-07-15 23:53:20.882178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.107 [2024-07-15 23:53:20.882265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.107 [2024-07-15 23:53:20.882280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.107 [2024-07-15 23:53:20.882288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.107 [2024-07-15 23:53:20.882294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.107 [2024-07-15 23:53:20.882308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.107 qpair failed and we were unable to recover it. 
00:27:32.107 [2024-07-15 23:53:20.892199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.107 [2024-07-15 23:53:20.892276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.107 [2024-07-15 23:53:20.892292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.107 [2024-07-15 23:53:20.892299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.107 [2024-07-15 23:53:20.892306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.107 [2024-07-15 23:53:20.892320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.107 qpair failed and we were unable to recover it. 
00:27:32.107 [2024-07-15 23:53:20.902267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.107 [2024-07-15 23:53:20.902350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.107 [2024-07-15 23:53:20.902370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.107 [2024-07-15 23:53:20.902377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.107 [2024-07-15 23:53:20.902384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.107 [2024-07-15 23:53:20.902401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.107 qpair failed and we were unable to recover it. 
00:27:32.108 [2024-07-15 23:53:20.912275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.108 [2024-07-15 23:53:20.912350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.108 [2024-07-15 23:53:20.912368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.108 [2024-07-15 23:53:20.912376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.108 [2024-07-15 23:53:20.912383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.108 [2024-07-15 23:53:20.912398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.108 qpair failed and we were unable to recover it. 
00:27:32.108 [2024-07-15 23:53:20.922286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.108 [2024-07-15 23:53:20.922362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.108 [2024-07-15 23:53:20.922380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.108 [2024-07-15 23:53:20.922387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.108 [2024-07-15 23:53:20.922398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.108 [2024-07-15 23:53:20.922414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.108 qpair failed and we were unable to recover it. 
00:27:32.108 [2024-07-15 23:53:20.932323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.108 [2024-07-15 23:53:20.932428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.108 [2024-07-15 23:53:20.932444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.108 [2024-07-15 23:53:20.932452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.108 [2024-07-15 23:53:20.932459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.108 [2024-07-15 23:53:20.932475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.108 qpair failed and we were unable to recover it. 
00:27:32.108 [2024-07-15 23:53:20.942348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.108 [2024-07-15 23:53:20.942419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.108 [2024-07-15 23:53:20.942435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.108 [2024-07-15 23:53:20.942443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.108 [2024-07-15 23:53:20.942450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.108 [2024-07-15 23:53:20.942465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.108 qpair failed and we were unable to recover it. 
00:27:32.108 [2024-07-15 23:53:20.952379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.108 [2024-07-15 23:53:20.952452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.108 [2024-07-15 23:53:20.952468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.108 [2024-07-15 23:53:20.952475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.108 [2024-07-15 23:53:20.952482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.108 [2024-07-15 23:53:20.952497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.108 qpair failed and we were unable to recover it. 
00:27:32.108 [2024-07-15 23:53:20.962405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.108 [2024-07-15 23:53:20.962477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.108 [2024-07-15 23:53:20.962494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.108 [2024-07-15 23:53:20.962501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.108 [2024-07-15 23:53:20.962508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.108 [2024-07-15 23:53:20.962523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.108 qpair failed and we were unable to recover it. 
00:27:32.108 [2024-07-15 23:53:20.972440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.108 [2024-07-15 23:53:20.972520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.108 [2024-07-15 23:53:20.972538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.108 [2024-07-15 23:53:20.972545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.108 [2024-07-15 23:53:20.972552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.108 [2024-07-15 23:53:20.972567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.108 qpair failed and we were unable to recover it. 
00:27:32.108 [2024-07-15 23:53:20.982495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.108 [2024-07-15 23:53:20.982563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.108 [2024-07-15 23:53:20.982579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.108 [2024-07-15 23:53:20.982586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.108 [2024-07-15 23:53:20.982592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.108 [2024-07-15 23:53:20.982607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.108 qpair failed and we were unable to recover it. 
00:27:32.108 [2024-07-15 23:53:20.992511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.108 [2024-07-15 23:53:20.992582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.108 [2024-07-15 23:53:20.992600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.108 [2024-07-15 23:53:20.992608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.108 [2024-07-15 23:53:20.992615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0 00:27:32.108 [2024-07-15 23:53:20.992630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.108 qpair failed and we were unable to recover it. 
00:27:32.108 [2024-07-15 23:53:21.002512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.108 [2024-07-15 23:53:21.002592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.108 [2024-07-15 23:53:21.002610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.108 [2024-07-15 23:53:21.002618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.108 [2024-07-15 23:53:21.002625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:32.108 [2024-07-15 23:53:21.002640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.108 qpair failed and we were unable to recover it.
00:27:32.108 [2024-07-15 23:53:21.012597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.108 [2024-07-15 23:53:21.012712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.108 [2024-07-15 23:53:21.012730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.108 [2024-07-15 23:53:21.012742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.108 [2024-07-15 23:53:21.012748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:32.108 [2024-07-15 23:53:21.012765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.108 qpair failed and we were unable to recover it.
00:27:32.108 [2024-07-15 23:53:21.022574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.108 [2024-07-15 23:53:21.022729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.108 [2024-07-15 23:53:21.022747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.108 [2024-07-15 23:53:21.022754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.108 [2024-07-15 23:53:21.022761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:32.108 [2024-07-15 23:53:21.022776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.108 qpair failed and we were unable to recover it.
00:27:32.108 [2024-07-15 23:53:21.032574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.108 [2024-07-15 23:53:21.032642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.108 [2024-07-15 23:53:21.032658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.108 [2024-07-15 23:53:21.032665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.108 [2024-07-15 23:53:21.032671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:32.108 [2024-07-15 23:53:21.032686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.108 qpair failed and we were unable to recover it.
00:27:32.108 [2024-07-15 23:53:21.042595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.108 [2024-07-15 23:53:21.042667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.108 [2024-07-15 23:53:21.042683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.108 [2024-07-15 23:53:21.042690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.108 [2024-07-15 23:53:21.042696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:32.108 [2024-07-15 23:53:21.042710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.108 qpair failed and we were unable to recover it.
00:27:32.108 [2024-07-15 23:53:21.052617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.108 [2024-07-15 23:53:21.052688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.108 [2024-07-15 23:53:21.052704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.108 [2024-07-15 23:53:21.052712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.108 [2024-07-15 23:53:21.052718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21d1ed0
00:27:32.108 [2024-07-15 23:53:21.052732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:32.108 qpair failed and we were unable to recover it.
00:27:32.108 [2024-07-15 23:53:21.062665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.108 [2024-07-15 23:53:21.062752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.108 [2024-07-15 23:53:21.062780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.108 [2024-07-15 23:53:21.062792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.108 [2024-07-15 23:53:21.062802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.108 [2024-07-15 23:53:21.062827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.108 qpair failed and we were unable to recover it.
00:27:32.108 [2024-07-15 23:53:21.072732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.108 [2024-07-15 23:53:21.072803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.109 [2024-07-15 23:53:21.072820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.109 [2024-07-15 23:53:21.072828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.109 [2024-07-15 23:53:21.072835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.109 [2024-07-15 23:53:21.072852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.109 qpair failed and we were unable to recover it.
00:27:32.368 [2024-07-15 23:53:21.082757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.082829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.082846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.082854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.082860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.082875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.092737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.092852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.092871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.092879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.092886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.092903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.102801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.102921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.102940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.102950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.102957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.102973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.112843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.112918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.112935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.112943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.112949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.112965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.122862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.122935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.122951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.122958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.122964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.122979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.132903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.132977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.132993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.133001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.133007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.133022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.142931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.143008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.143024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.143032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.143038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.143053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.152966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.153040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.153057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.153065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.153071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.153087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.162987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.163064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.163082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.163089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.163096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.163113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.173011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.173080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.173097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.173105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.173111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.173127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.183063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.183178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.183195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.183202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.183209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.183229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.193085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.193158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.193176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.193184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.193190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.193206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.203088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.203160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.203176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.203183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.203189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.203204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.213124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.213196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.213213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.213220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.213230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.213246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.223170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.223247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.223264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.223271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.223277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.223293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.233183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.233264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.233282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.233292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.233302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.233323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.243203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.243282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.243298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.243306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.243312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.243327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.253166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.253239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.369 [2024-07-15 23:53:21.253255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.369 [2024-07-15 23:53:21.253263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.369 [2024-07-15 23:53:21.253269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.369 [2024-07-15 23:53:21.253284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.369 qpair failed and we were unable to recover it.
00:27:32.369 [2024-07-15 23:53:21.263265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.369 [2024-07-15 23:53:21.263339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.370 [2024-07-15 23:53:21.263355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.370 [2024-07-15 23:53:21.263363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.370 [2024-07-15 23:53:21.263369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.370 [2024-07-15 23:53:21.263384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.370 qpair failed and we were unable to recover it.
00:27:32.370 [2024-07-15 23:53:21.273325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.370 [2024-07-15 23:53:21.273396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.370 [2024-07-15 23:53:21.273412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.370 [2024-07-15 23:53:21.273420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.370 [2024-07-15 23:53:21.273426] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.370 [2024-07-15 23:53:21.273442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.370 qpair failed and we were unable to recover it.
00:27:32.370 [2024-07-15 23:53:21.283312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.370 [2024-07-15 23:53:21.283379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.370 [2024-07-15 23:53:21.283398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.370 [2024-07-15 23:53:21.283405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.370 [2024-07-15 23:53:21.283411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.370 [2024-07-15 23:53:21.283427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.370 qpair failed and we were unable to recover it.
00:27:32.370 [2024-07-15 23:53:21.293270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.370 [2024-07-15 23:53:21.293343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.370 [2024-07-15 23:53:21.293359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.370 [2024-07-15 23:53:21.293366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.370 [2024-07-15 23:53:21.293373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.370 [2024-07-15 23:53:21.293388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.370 qpair failed and we were unable to recover it.
00:27:32.370 [2024-07-15 23:53:21.303370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.370 [2024-07-15 23:53:21.303443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.370 [2024-07-15 23:53:21.303459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.370 [2024-07-15 23:53:21.303467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.370 [2024-07-15 23:53:21.303473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.370 [2024-07-15 23:53:21.303489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.370 qpair failed and we were unable to recover it.
00:27:32.370 [2024-07-15 23:53:21.313410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.370 [2024-07-15 23:53:21.313486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.370 [2024-07-15 23:53:21.313501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.370 [2024-07-15 23:53:21.313509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.370 [2024-07-15 23:53:21.313515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.370 [2024-07-15 23:53:21.313530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.370 qpair failed and we were unable to recover it.
00:27:32.370 [2024-07-15 23:53:21.323428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.370 [2024-07-15 23:53:21.323501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.370 [2024-07-15 23:53:21.323517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.370 [2024-07-15 23:53:21.323524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.370 [2024-07-15 23:53:21.323531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.370 [2024-07-15 23:53:21.323550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.370 qpair failed and we were unable to recover it.
00:27:32.370 [2024-07-15 23:53:21.333455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.370 [2024-07-15 23:53:21.333533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.370 [2024-07-15 23:53:21.333549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.370 [2024-07-15 23:53:21.333557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.370 [2024-07-15 23:53:21.333563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.370 [2024-07-15 23:53:21.333578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.370 qpair failed and we were unable to recover it.
00:27:32.630 [2024-07-15 23:53:21.343487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.630 [2024-07-15 23:53:21.343551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.630 [2024-07-15 23:53:21.343567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.630 [2024-07-15 23:53:21.343574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.630 [2024-07-15 23:53:21.343580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.630 [2024-07-15 23:53:21.343596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.630 qpair failed and we were unable to recover it.
00:27:32.630 [2024-07-15 23:53:21.353470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:32.630 [2024-07-15 23:53:21.353540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:32.630 [2024-07-15 23:53:21.353556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:32.630 [2024-07-15 23:53:21.353563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:32.630 [2024-07-15 23:53:21.353570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:32.630 [2024-07-15 23:53:21.353585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:32.630 qpair failed and we were unable to recover it.
00:27:32.630 [2024-07-15 23:53:21.363520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.630 [2024-07-15 23:53:21.363602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.630 [2024-07-15 23:53:21.363617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.630 [2024-07-15 23:53:21.363625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.630 [2024-07-15 23:53:21.363631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.630 [2024-07-15 23:53:21.363645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.630 qpair failed and we were unable to recover it. 
00:27:32.630 [2024-07-15 23:53:21.373562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.630 [2024-07-15 23:53:21.373635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.630 [2024-07-15 23:53:21.373651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.630 [2024-07-15 23:53:21.373658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.630 [2024-07-15 23:53:21.373665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.630 [2024-07-15 23:53:21.373680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.630 qpair failed and we were unable to recover it. 
00:27:32.630 [2024-07-15 23:53:21.383603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.630 [2024-07-15 23:53:21.383675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.630 [2024-07-15 23:53:21.383691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.630 [2024-07-15 23:53:21.383698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.630 [2024-07-15 23:53:21.383705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.630 [2024-07-15 23:53:21.383720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.630 qpair failed and we were unable to recover it. 
00:27:32.630 [2024-07-15 23:53:21.393579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.630 [2024-07-15 23:53:21.393651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.630 [2024-07-15 23:53:21.393667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.630 [2024-07-15 23:53:21.393675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.630 [2024-07-15 23:53:21.393681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.630 [2024-07-15 23:53:21.393697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.630 qpair failed and we were unable to recover it. 
00:27:32.630 [2024-07-15 23:53:21.403645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.630 [2024-07-15 23:53:21.403717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.630 [2024-07-15 23:53:21.403733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.630 [2024-07-15 23:53:21.403741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.630 [2024-07-15 23:53:21.403748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.630 [2024-07-15 23:53:21.403764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.630 qpair failed and we were unable to recover it. 
00:27:32.630 [2024-07-15 23:53:21.413656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.630 [2024-07-15 23:53:21.413719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.630 [2024-07-15 23:53:21.413735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.630 [2024-07-15 23:53:21.413741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.630 [2024-07-15 23:53:21.413751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.630 [2024-07-15 23:53:21.413765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.630 qpair failed and we were unable to recover it. 
00:27:32.630 [2024-07-15 23:53:21.423701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.630 [2024-07-15 23:53:21.423768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.630 [2024-07-15 23:53:21.423784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.630 [2024-07-15 23:53:21.423791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.630 [2024-07-15 23:53:21.423797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.630 [2024-07-15 23:53:21.423812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.630 qpair failed and we were unable to recover it. 
00:27:32.630 [2024-07-15 23:53:21.433736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.630 [2024-07-15 23:53:21.433804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.630 [2024-07-15 23:53:21.433821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.630 [2024-07-15 23:53:21.433828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.630 [2024-07-15 23:53:21.433835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.630 [2024-07-15 23:53:21.433850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.443710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.443777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.443793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.443800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.443806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.443821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.453702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.453770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.453786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.453793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.453799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.453814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.463821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.463888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.463904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.463911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.463917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.463932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.473854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.473927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.473943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.473951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.473957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.473972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.483874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.483949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.483964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.483971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.483978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.483993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.493947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.494027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.494043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.494050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.494056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.494070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.503941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.504050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.504068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.504080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.504087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.504102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.513964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.514039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.514054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.514061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.514067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.514082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.523990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.524063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.524079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.524086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.524092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.524108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.534027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.534099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.534115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.534122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.534129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.534143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.544056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.544137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.544153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.544160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.544167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.544182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.554119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.554218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.554237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.554245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.554251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.554267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.564109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.564182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.564197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.564204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.564210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.564231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.631 [2024-07-15 23:53:21.574141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.631 [2024-07-15 23:53:21.574216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.631 [2024-07-15 23:53:21.574237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.631 [2024-07-15 23:53:21.574245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.631 [2024-07-15 23:53:21.574252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.631 [2024-07-15 23:53:21.574268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.631 qpair failed and we were unable to recover it. 
00:27:32.632 [2024-07-15 23:53:21.584161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.632 [2024-07-15 23:53:21.584233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.632 [2024-07-15 23:53:21.584248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.632 [2024-07-15 23:53:21.584256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.632 [2024-07-15 23:53:21.584262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.632 [2024-07-15 23:53:21.584277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.632 qpair failed and we were unable to recover it. 
00:27:32.632 [2024-07-15 23:53:21.594212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.632 [2024-07-15 23:53:21.594289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.632 [2024-07-15 23:53:21.594308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.632 [2024-07-15 23:53:21.594315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.632 [2024-07-15 23:53:21.594321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.632 [2024-07-15 23:53:21.594336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.632 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.604223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.604300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.604317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.604325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.604332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.604348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.614247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.614315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.614330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.614338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.614344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.614359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.624281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.624354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.624370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.624377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.624383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.624398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.634336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.634407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.634422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.634430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.634436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.634451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.644345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.644418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.644434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.644442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.644448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.644463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.654359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.654427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.654443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.654450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.654456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.654471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.664404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.664518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.664535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.664542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.664549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.664564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.674462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.674537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.674553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.674560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.674566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.674581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.684420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.684489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.684508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.684515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.684521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.684536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.694483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.694557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.694574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.694581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.694588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.694604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.704510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.704582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.704598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.704606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.704612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.704628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.714545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.714616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.714631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.714639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.714645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.714660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.724564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.893 [2024-07-15 23:53:21.724637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.893 [2024-07-15 23:53:21.724653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.893 [2024-07-15 23:53:21.724660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.893 [2024-07-15 23:53:21.724666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.893 [2024-07-15 23:53:21.724685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.893 qpair failed and we were unable to recover it. 
00:27:32.893 [2024-07-15 23:53:21.734619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.734692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.734707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.734714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.734720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.734735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.744653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.744724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.744741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.744748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.744755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.744771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.754581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.754650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.754666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.754674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.754680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.754696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.764677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.764755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.764770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.764779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.764785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.764800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.774697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.774768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.774787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.774795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.774801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.774816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.784748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.784821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.784837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.784844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.784851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.784866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.794769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.794841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.794856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.794863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.794870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.794885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.804791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.804859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.804875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.804882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.804888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.804903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.814737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.814804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.814819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.814827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.814835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.814851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.824867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.824941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.824958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.824965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.824972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.824988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.834874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.834948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.834965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.834972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.834979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.834994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.844835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.844942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.844957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.844965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.844972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.844987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:32.894 [2024-07-15 23:53:21.854948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.894 [2024-07-15 23:53:21.855017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.894 [2024-07-15 23:53:21.855032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.894 [2024-07-15 23:53:21.855040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.894 [2024-07-15 23:53:21.855046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:32.894 [2024-07-15 23:53:21.855061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.894 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.864992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.155 [2024-07-15 23:53:21.865064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.155 [2024-07-15 23:53:21.865080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.155 [2024-07-15 23:53:21.865087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.155 [2024-07-15 23:53:21.865093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.155 [2024-07-15 23:53:21.865109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.875030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.155 [2024-07-15 23:53:21.875103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.155 [2024-07-15 23:53:21.875120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.155 [2024-07-15 23:53:21.875128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.155 [2024-07-15 23:53:21.875134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.155 [2024-07-15 23:53:21.875149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.885016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.155 [2024-07-15 23:53:21.885088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.155 [2024-07-15 23:53:21.885105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.155 [2024-07-15 23:53:21.885113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.155 [2024-07-15 23:53:21.885119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.155 [2024-07-15 23:53:21.885135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.895053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.155 [2024-07-15 23:53:21.895175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.155 [2024-07-15 23:53:21.895192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.155 [2024-07-15 23:53:21.895199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.155 [2024-07-15 23:53:21.895205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.155 [2024-07-15 23:53:21.895221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.905107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.155 [2024-07-15 23:53:21.905174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.155 [2024-07-15 23:53:21.905190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.155 [2024-07-15 23:53:21.905247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.155 [2024-07-15 23:53:21.905254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.155 [2024-07-15 23:53:21.905270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.915144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.155 [2024-07-15 23:53:21.915249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.155 [2024-07-15 23:53:21.915264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.155 [2024-07-15 23:53:21.915271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.155 [2024-07-15 23:53:21.915277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.155 [2024-07-15 23:53:21.915293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.925064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.155 [2024-07-15 23:53:21.925143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.155 [2024-07-15 23:53:21.925161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.155 [2024-07-15 23:53:21.925168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.155 [2024-07-15 23:53:21.925174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.155 [2024-07-15 23:53:21.925190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.935144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.155 [2024-07-15 23:53:21.935210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.155 [2024-07-15 23:53:21.935230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.155 [2024-07-15 23:53:21.935238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.155 [2024-07-15 23:53:21.935244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.155 [2024-07-15 23:53:21.935259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.945178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.155 [2024-07-15 23:53:21.945252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.155 [2024-07-15 23:53:21.945268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.155 [2024-07-15 23:53:21.945276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.155 [2024-07-15 23:53:21.945282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.155 [2024-07-15 23:53:21.945297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.955210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.155 [2024-07-15 23:53:21.955289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.155 [2024-07-15 23:53:21.955305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.155 [2024-07-15 23:53:21.955313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.155 [2024-07-15 23:53:21.955319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.155 [2024-07-15 23:53:21.955335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.155 qpair failed and we were unable to recover it. 
00:27:33.155 [2024-07-15 23:53:21.965231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.155 [2024-07-15 23:53:21.965307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.155 [2024-07-15 23:53:21.965323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.155 [2024-07-15 23:53:21.965331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.155 [2024-07-15 23:53:21.965338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.155 [2024-07-15 23:53:21.965353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.155 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:21.975279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:21.975358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:21.975376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:21.975383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:21.975390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:21.975405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:21.985282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:21.985356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:21.985372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:21.985379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:21.985386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:21.985401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:21.995338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:21.995415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:21.995431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:21.995441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:21.995447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:21.995463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.005345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.005421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.005439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.005447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.005453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.005470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.015323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.015398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.015416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.015424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.015431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.015447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.025349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.025421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.025440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.025447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.025455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.025471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.035412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.035488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.035504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.035511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.035518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.035533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.045498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.045578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.045596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.045603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.045609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.045625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.055496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.055570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.055587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.055595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.055602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.055618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.065522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.065602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.065619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.065626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.065633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.065651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.075566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.075644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.075661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.075669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.075676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.075693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.085577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.085644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.085665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.085673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.085679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.085694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.095563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.095658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.095673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.095680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.095687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.095702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.105630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.105697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.156 [2024-07-15 23:53:22.105712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.156 [2024-07-15 23:53:22.105719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.156 [2024-07-15 23:53:22.105725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.156 [2024-07-15 23:53:22.105741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.156 qpair failed and we were unable to recover it.
00:27:33.156 [2024-07-15 23:53:22.115588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.156 [2024-07-15 23:53:22.115658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.157 [2024-07-15 23:53:22.115674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.157 [2024-07-15 23:53:22.115681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.157 [2024-07-15 23:53:22.115687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.157 [2024-07-15 23:53:22.115703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.157 qpair failed and we were unable to recover it.
00:27:33.157 [2024-07-15 23:53:22.125674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.157 [2024-07-15 23:53:22.125750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.157 [2024-07-15 23:53:22.125767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.157 [2024-07-15 23:53:22.125774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.157 [2024-07-15 23:53:22.125781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.157 [2024-07-15 23:53:22.125800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.157 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.135713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.135784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.135801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.135808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.135814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.135830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.145723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.145793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.145808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.145815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.145821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.145837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.155818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.155889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.155905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.155912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.155918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.155933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.165798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.165870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.165887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.165894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.165900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.165915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.175804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.175877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.175897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.175904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.175910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.175926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.185849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.185915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.185931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.185939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.185945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.185960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.195890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.195966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.195982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.195990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.195996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.196011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.205907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.205982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.205998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.206005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.206011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.206027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.215918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.215992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.216008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.216015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.216024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.216040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.225989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.226056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.226072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.226079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.226085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.226101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.235949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.236021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.236037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.417 [2024-07-15 23:53:22.236044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.417 [2024-07-15 23:53:22.236050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.417 [2024-07-15 23:53:22.236066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.417 qpair failed and we were unable to recover it.
00:27:33.417 [2024-07-15 23:53:22.246046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.417 [2024-07-15 23:53:22.246120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.417 [2024-07-15 23:53:22.246136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.418 [2024-07-15 23:53:22.246144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.418 [2024-07-15 23:53:22.246151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.418 [2024-07-15 23:53:22.246166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.418 qpair failed and we were unable to recover it.
00:27:33.418 [2024-07-15 23:53:22.256047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.418 [2024-07-15 23:53:22.256122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.418 [2024-07-15 23:53:22.256138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.418 [2024-07-15 23:53:22.256146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.418 [2024-07-15 23:53:22.256152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.418 [2024-07-15 23:53:22.256167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.418 qpair failed and we were unable to recover it.
00:27:33.418 [2024-07-15 23:53:22.266089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.418 [2024-07-15 23:53:22.266172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.418 [2024-07-15 23:53:22.266188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.418 [2024-07-15 23:53:22.266196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.418 [2024-07-15 23:53:22.266202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.418 [2024-07-15 23:53:22.266218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.418 qpair failed and we were unable to recover it.
00:27:33.418 [2024-07-15 23:53:22.276086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.418 [2024-07-15 23:53:22.276158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.418 [2024-07-15 23:53:22.276174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.418 [2024-07-15 23:53:22.276182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.418 [2024-07-15 23:53:22.276189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.418 [2024-07-15 23:53:22.276204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.418 qpair failed and we were unable to recover it.
00:27:33.418 [2024-07-15 23:53:22.286191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.418 [2024-07-15 23:53:22.286280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.418 [2024-07-15 23:53:22.286296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.418 [2024-07-15 23:53:22.286303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.418 [2024-07-15 23:53:22.286310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.418 [2024-07-15 23:53:22.286326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.418 qpair failed and we were unable to recover it.
00:27:33.418 [2024-07-15 23:53:22.296153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.418 [2024-07-15 23:53:22.296219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.418 [2024-07-15 23:53:22.296240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.418 [2024-07-15 23:53:22.296248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.418 [2024-07-15 23:53:22.296254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.418 [2024-07-15 23:53:22.296269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.418 qpair failed and we were unable to recover it.
00:27:33.418 [2024-07-15 23:53:22.306178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.418 [2024-07-15 23:53:22.306257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.418 [2024-07-15 23:53:22.306273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.418 [2024-07-15 23:53:22.306284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.418 [2024-07-15 23:53:22.306290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.418 [2024-07-15 23:53:22.306306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.418 qpair failed and we were unable to recover it.
00:27:33.418 [2024-07-15 23:53:22.316173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.418 [2024-07-15 23:53:22.316247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.418 [2024-07-15 23:53:22.316263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.418 [2024-07-15 23:53:22.316271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.418 [2024-07-15 23:53:22.316278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90
00:27:33.418 [2024-07-15 23:53:22.316293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.418 qpair failed and we were unable to recover it.
00:27:33.418 [2024-07-15 23:53:22.326259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.418 [2024-07-15 23:53:22.326333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.418 [2024-07-15 23:53:22.326349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.418 [2024-07-15 23:53:22.326356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.418 [2024-07-15 23:53:22.326363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.418 [2024-07-15 23:53:22.326378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.418 qpair failed and we were unable to recover it. 
00:27:33.418 [2024-07-15 23:53:22.336311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.418 [2024-07-15 23:53:22.336418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.418 [2024-07-15 23:53:22.336434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.418 [2024-07-15 23:53:22.336441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.418 [2024-07-15 23:53:22.336447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.418 [2024-07-15 23:53:22.336463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.418 qpair failed and we were unable to recover it. 
00:27:33.418 [2024-07-15 23:53:22.346279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.418 [2024-07-15 23:53:22.346348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.418 [2024-07-15 23:53:22.346364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.418 [2024-07-15 23:53:22.346371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.418 [2024-07-15 23:53:22.346377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.418 [2024-07-15 23:53:22.346392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.418 qpair failed and we were unable to recover it. 
00:27:33.418 [2024-07-15 23:53:22.356389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.418 [2024-07-15 23:53:22.356487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.418 [2024-07-15 23:53:22.356502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.418 [2024-07-15 23:53:22.356510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.418 [2024-07-15 23:53:22.356516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.418 [2024-07-15 23:53:22.356532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.418 qpair failed and we were unable to recover it. 
00:27:33.418 [2024-07-15 23:53:22.366428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.418 [2024-07-15 23:53:22.366493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.418 [2024-07-15 23:53:22.366509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.418 [2024-07-15 23:53:22.366517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.418 [2024-07-15 23:53:22.366524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.418 [2024-07-15 23:53:22.366539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.418 qpair failed and we were unable to recover it. 
00:27:33.418 [2024-07-15 23:53:22.376438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.418 [2024-07-15 23:53:22.376508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.418 [2024-07-15 23:53:22.376524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.418 [2024-07-15 23:53:22.376532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.418 [2024-07-15 23:53:22.376538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.418 [2024-07-15 23:53:22.376553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.418 qpair failed and we were unable to recover it. 
00:27:33.418 [2024-07-15 23:53:22.386458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.418 [2024-07-15 23:53:22.386523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.418 [2024-07-15 23:53:22.386539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.418 [2024-07-15 23:53:22.386546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.418 [2024-07-15 23:53:22.386553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.419 [2024-07-15 23:53:22.386568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.419 qpair failed and we were unable to recover it. 
00:27:33.679 [2024-07-15 23:53:22.396509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.679 [2024-07-15 23:53:22.396591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.679 [2024-07-15 23:53:22.396606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.679 [2024-07-15 23:53:22.396617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.679 [2024-07-15 23:53:22.396623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.679 [2024-07-15 23:53:22.396638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.679 qpair failed and we were unable to recover it. 
00:27:33.679 [2024-07-15 23:53:22.406514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.679 [2024-07-15 23:53:22.406588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.679 [2024-07-15 23:53:22.406604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.679 [2024-07-15 23:53:22.406612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.679 [2024-07-15 23:53:22.406618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.679 [2024-07-15 23:53:22.406633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.679 qpair failed and we were unable to recover it. 
00:27:33.679 [2024-07-15 23:53:22.416467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.679 [2024-07-15 23:53:22.416535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.679 [2024-07-15 23:53:22.416553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.679 [2024-07-15 23:53:22.416560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.679 [2024-07-15 23:53:22.416567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.679 [2024-07-15 23:53:22.416582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.679 qpair failed and we were unable to recover it. 
00:27:33.679 [2024-07-15 23:53:22.426538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.679 [2024-07-15 23:53:22.426617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.679 [2024-07-15 23:53:22.426632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.679 [2024-07-15 23:53:22.426639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.679 [2024-07-15 23:53:22.426645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.679 [2024-07-15 23:53:22.426660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.679 qpair failed and we were unable to recover it. 
00:27:33.679 [2024-07-15 23:53:22.436623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.679 [2024-07-15 23:53:22.436693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.679 [2024-07-15 23:53:22.436710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.679 [2024-07-15 23:53:22.436717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.436724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.436740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.446648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.446719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.446735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.446742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.446748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.446763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.456673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.456742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.456758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.456765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.456772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.456787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.466699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.466769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.466785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.466792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.466798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.466813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.476730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.476805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.476821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.476828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.476834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.476849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.486702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.486784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.486802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.486809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.486815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.486830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.496789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.496861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.496877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.496884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.496891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.496906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.506834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.506903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.506919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.506926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.506933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.506947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.516859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.516932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.516947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.516955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.516962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.516977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.526869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.526942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.526957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.526965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.526971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.526989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.536898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.536972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.536988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.536995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.537002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.537016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.546934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.546998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.547013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.547020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.547026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.547041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.556981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.557053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.557068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.557075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.557081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.557096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.566917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.566999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.567015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.567022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.567028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.567043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.680 qpair failed and we were unable to recover it. 
00:27:33.680 [2024-07-15 23:53:22.577008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.680 [2024-07-15 23:53:22.577078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.680 [2024-07-15 23:53:22.577097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.680 [2024-07-15 23:53:22.577105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.680 [2024-07-15 23:53:22.577112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.680 [2024-07-15 23:53:22.577127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.681 qpair failed and we were unable to recover it. 
00:27:33.681 [2024-07-15 23:53:22.587041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.681 [2024-07-15 23:53:22.587108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.681 [2024-07-15 23:53:22.587124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.681 [2024-07-15 23:53:22.587131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.681 [2024-07-15 23:53:22.587138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.681 [2024-07-15 23:53:22.587153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.681 qpair failed and we were unable to recover it. 
00:27:33.681 [2024-07-15 23:53:22.597112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.681 [2024-07-15 23:53:22.597188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.681 [2024-07-15 23:53:22.597204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.681 [2024-07-15 23:53:22.597211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.681 [2024-07-15 23:53:22.597218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.681 [2024-07-15 23:53:22.597239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.681 qpair failed and we were unable to recover it. 
00:27:33.681 [2024-07-15 23:53:22.607106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.681 [2024-07-15 23:53:22.607206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.681 [2024-07-15 23:53:22.607222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.681 [2024-07-15 23:53:22.607233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.681 [2024-07-15 23:53:22.607240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.681 [2024-07-15 23:53:22.607255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.681 qpair failed and we were unable to recover it. 
00:27:33.681 [2024-07-15 23:53:22.617151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.681 [2024-07-15 23:53:22.617229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.681 [2024-07-15 23:53:22.617246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.681 [2024-07-15 23:53:22.617253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.681 [2024-07-15 23:53:22.617263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.681 [2024-07-15 23:53:22.617278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.681 qpair failed and we were unable to recover it. 
00:27:33.681 [2024-07-15 23:53:22.627182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.681 [2024-07-15 23:53:22.627258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.681 [2024-07-15 23:53:22.627273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.681 [2024-07-15 23:53:22.627280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.681 [2024-07-15 23:53:22.627286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.681 [2024-07-15 23:53:22.627301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.681 qpair failed and we were unable to recover it. 
00:27:33.681 [2024-07-15 23:53:22.637218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.681 [2024-07-15 23:53:22.637292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.681 [2024-07-15 23:53:22.637307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.681 [2024-07-15 23:53:22.637314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.681 [2024-07-15 23:53:22.637321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.681 [2024-07-15 23:53:22.637336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.681 qpair failed and we were unable to recover it. 
00:27:33.681 [2024-07-15 23:53:22.647282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.681 [2024-07-15 23:53:22.647357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.681 [2024-07-15 23:53:22.647372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.681 [2024-07-15 23:53:22.647379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.681 [2024-07-15 23:53:22.647385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.681 [2024-07-15 23:53:22.647400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.681 qpair failed and we were unable to recover it. 
00:27:33.940 [2024-07-15 23:53:22.657213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.940 [2024-07-15 23:53:22.657294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.940 [2024-07-15 23:53:22.657310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.940 [2024-07-15 23:53:22.657318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.940 [2024-07-15 23:53:22.657324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.940 [2024-07-15 23:53:22.657339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.940 qpair failed and we were unable to recover it. 
00:27:33.940 [2024-07-15 23:53:22.667283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.940 [2024-07-15 23:53:22.667358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.940 [2024-07-15 23:53:22.667374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.940 [2024-07-15 23:53:22.667381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.940 [2024-07-15 23:53:22.667387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.940 [2024-07-15 23:53:22.667403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.940 qpair failed and we were unable to recover it. 
00:27:33.940 [2024-07-15 23:53:22.677359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.940 [2024-07-15 23:53:22.677429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.940 [2024-07-15 23:53:22.677445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.940 [2024-07-15 23:53:22.677453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.940 [2024-07-15 23:53:22.677459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.940 [2024-07-15 23:53:22.677474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.940 qpair failed and we were unable to recover it. 
00:27:33.940 [2024-07-15 23:53:22.687343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.940 [2024-07-15 23:53:22.687412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.940 [2024-07-15 23:53:22.687427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.940 [2024-07-15 23:53:22.687434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.940 [2024-07-15 23:53:22.687440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.940 [2024-07-15 23:53:22.687455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.940 qpair failed and we were unable to recover it. 
00:27:33.940 [2024-07-15 23:53:22.697430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.940 [2024-07-15 23:53:22.697507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.940 [2024-07-15 23:53:22.697522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.940 [2024-07-15 23:53:22.697530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.940 [2024-07-15 23:53:22.697536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.940 [2024-07-15 23:53:22.697551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.940 qpair failed and we were unable to recover it. 
00:27:33.940 [2024-07-15 23:53:22.707408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.940 [2024-07-15 23:53:22.707480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.940 [2024-07-15 23:53:22.707496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.940 [2024-07-15 23:53:22.707504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.941 [2024-07-15 23:53:22.707513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.941 [2024-07-15 23:53:22.707528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.941 qpair failed and we were unable to recover it. 
00:27:33.941 [2024-07-15 23:53:22.717405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.941 [2024-07-15 23:53:22.717483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.941 [2024-07-15 23:53:22.717499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.941 [2024-07-15 23:53:22.717506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.941 [2024-07-15 23:53:22.717512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.941 [2024-07-15 23:53:22.717528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.941 qpair failed and we were unable to recover it. 
00:27:33.941 [2024-07-15 23:53:22.727469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.941 [2024-07-15 23:53:22.727541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.941 [2024-07-15 23:53:22.727557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.941 [2024-07-15 23:53:22.727564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.941 [2024-07-15 23:53:22.727570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.941 [2024-07-15 23:53:22.727586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.941 qpair failed and we were unable to recover it. 
00:27:33.941 [2024-07-15 23:53:22.737510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.941 [2024-07-15 23:53:22.737585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.941 [2024-07-15 23:53:22.737600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.941 [2024-07-15 23:53:22.737607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.941 [2024-07-15 23:53:22.737613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.941 [2024-07-15 23:53:22.737629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.941 qpair failed and we were unable to recover it. 
00:27:33.941 [2024-07-15 23:53:22.747502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.941 [2024-07-15 23:53:22.747566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.941 [2024-07-15 23:53:22.747582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.941 [2024-07-15 23:53:22.747589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.941 [2024-07-15 23:53:22.747596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fbe50000b90 00:27:33.941 [2024-07-15 23:53:22.747611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.941 qpair failed and we were unable to recover it. 00:27:33.941 [2024-07-15 23:53:22.747631] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:33.941 A controller has encountered a failure and is being reset. 00:27:33.941 Controller properly reset. 00:27:34.200 Initializing NVMe Controllers 00:27:34.200 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:34.200 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:34.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:34.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:34.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:34.200 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:34.200 Initialization complete. Launching workers. 
00:27:34.200 Starting thread on core 1 00:27:34.200 Starting thread on core 2 00:27:34.200 Starting thread on core 3 00:27:34.200 Starting thread on core 0 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:34.200 00:27:34.200 real 0m11.146s 00:27:34.200 user 0m21.669s 00:27:34.200 sys 0m4.259s 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:34.200 ************************************ 00:27:34.200 END TEST nvmf_target_disconnect_tc2 00:27:34.200 ************************************ 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:34.200 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:34.200 rmmod nvme_tcp 00:27:34.462 rmmod nvme_fabrics 00:27:34.462 rmmod nvme_keyring 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:34.462 
23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1164982 ']' 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1164982 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@942 -- # '[' -z 1164982 ']' 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # kill -0 1164982 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # uname 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1164982 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # process_name=reactor_4 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' reactor_4 = sudo ']' 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1164982' 00:27:34.462 killing process with pid 1164982 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@961 -- # kill 1164982 00:27:34.462 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # wait 1164982 00:27:34.721 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:34.721 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:34.721 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:34.721 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:34.721 23:53:23 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:34.721 23:53:23 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.721 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.721 23:53:23 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.625 23:53:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:36.626 00:27:36.626 real 0m19.164s 00:27:36.626 user 0m48.255s 00:27:36.626 sys 0m8.588s 00:27:36.626 23:53:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:36.626 23:53:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:36.626 ************************************ 00:27:36.626 END TEST nvmf_target_disconnect 00:27:36.626 ************************************ 00:27:36.626 23:53:25 nvmf_tcp -- common/autotest_common.sh@1136 -- # return 0 00:27:36.626 23:53:25 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:36.626 23:53:25 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.626 23:53:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.886 23:53:25 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:36.886 00:27:36.886 real 20m52.487s 00:27:36.886 user 45m6.568s 00:27:36.886 sys 6m17.516s 00:27:36.886 23:53:25 nvmf_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:36.886 23:53:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.886 ************************************ 00:27:36.886 END TEST nvmf_tcp 00:27:36.886 ************************************ 00:27:36.886 23:53:25 -- common/autotest_common.sh@1136 -- # return 0 00:27:36.886 23:53:25 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:27:36.886 23:53:25 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:36.886 23:53:25 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:27:36.886 23:53:25 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:36.886 23:53:25 -- common/autotest_common.sh@10 -- # set +x 00:27:36.886 ************************************ 00:27:36.886 START TEST spdkcli_nvmf_tcp 00:27:36.886 ************************************ 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:36.886 * Looking for test storage... 00:27:36.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.886 23:53:25 spdkcli_nvmf_tcp 
-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1166512 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1166512 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@823 -- # '[' -z 1166512 ']' 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # local max_retries=100 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # xtrace_disable 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:36.886 23:53:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:36.886 [2024-07-15 23:53:25.830550] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:27:36.886 [2024-07-15 23:53:25.830598] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166512 ] 00:27:37.146 [2024-07-15 23:53:25.884153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:37.146 [2024-07-15 23:53:25.964826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.146 [2024-07-15 23:53:25.964830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # return 0 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.743 23:53:26 spdkcli_nvmf_tcp -- 
spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:37.743 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:37.743 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:37.743 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:37.743 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:37.743 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:37.743 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:37.743 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:37.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:37.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:37.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:37.743 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:37.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:37.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:37.743 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:37.743 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:37.744 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:37.744 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:37.744 ' 00:27:40.275 [2024-07-15 23:53:29.047020] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.650 [2024-07-15 23:53:30.255050] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:44.179 [2024-07-15 23:53:32.634463] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:46.082 [2024-07-15 23:53:34.680775] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:47.458 Executing command: ['/bdevs/malloc create 32 512 
Malloc1', 'Malloc1', True] 00:27:47.458 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:47.458 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:47.458 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:47.458 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:47.458 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:47.458 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:47.458 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:47.458 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:47.458 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:47.458 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:47.458 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:47.458 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:47.458 23:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:47.458 23:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:47.458 23:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:47.458 23:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:27:47.458 23:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:47.458 23:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:47.458 23:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:47.458 23:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:48.025 23:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:48.025 23:53:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:48.025 23:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:48.025 23:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:48.025 23:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:48.025 23:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:48.025 23:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:48.025 23:53:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:48.025 23:53:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:48.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:48.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:48.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:48.025 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:48.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:48.025 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:48.025 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:48.025 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:48.025 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:48.025 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:48.025 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:48.025 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:48.025 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:48.025 ' 00:27:53.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:53.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:53.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:53.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:53.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:53.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:53.299 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:53.299 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:53.299 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:53.299 Executing command: 
['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:53.299 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:53.299 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:53.299 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:53.299 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1166512 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@942 -- # '[' -z 1166512 ']' 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # kill -0 1166512 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # uname 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1166512 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1166512' 00:27:53.299 killing process with pid 1166512 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@961 -- # kill 1166512 00:27:53.299 23:53:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # wait 1166512 00:27:53.299 23:53:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:53.299 23:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:53.299 23:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # 
'[' -n 1166512 ']' 00:27:53.299 23:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1166512 00:27:53.299 23:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@942 -- # '[' -z 1166512 ']' 00:27:53.300 23:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # kill -0 1166512 00:27:53.300 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1166512) - No such process 00:27:53.300 23:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # echo 'Process with pid 1166512 is not found' 00:27:53.300 Process with pid 1166512 is not found 00:27:53.300 23:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:53.300 23:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:53.300 23:53:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:53.300 00:27:53.300 real 0m16.352s 00:27:53.300 user 0m34.683s 00:27:53.300 sys 0m0.782s 00:27:53.300 23:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:27:53.300 23:53:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:53.300 ************************************ 00:27:53.300 END TEST spdkcli_nvmf_tcp 00:27:53.300 ************************************ 00:27:53.300 23:53:42 -- common/autotest_common.sh@1136 -- # return 0 00:27:53.300 23:53:42 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:53.300 23:53:42 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:27:53.300 23:53:42 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:27:53.300 23:53:42 -- common/autotest_common.sh@10 -- # set +x 00:27:53.300 ************************************ 00:27:53.300 START 
TEST nvmf_identify_passthru 00:27:53.300 ************************************ 00:27:53.300 23:53:42 nvmf_identify_passthru -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:53.300 * Looking for test storage... 00:27:53.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:53.300 23:53:42 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.300 23:53:42 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.300 23:53:42 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.300 23:53:42 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.300 23:53:42 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.300 23:53:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.300 23:53:42 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.300 23:53:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:53.300 23:53:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:53.300 23:53:42 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.300 23:53:42 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.300 23:53:42 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.300 23:53:42 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.300 23:53:42 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.300 23:53:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.300 23:53:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.300 23:53:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:27:53.300 23:53:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.300 23:53:42 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:53.300 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.300 23:53:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:53.301 23:53:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.301 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:53.301 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:53.301 23:53:42 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:53.301 23:53:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:58.574 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:58.574 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:58.575 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:58.575 23:53:47 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:58.575 Found net devices under 0000:86:00.0: cvl_0_0 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.575 23:53:47 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:58.575 Found net devices under 0000:86:00.1: cvl_0_1 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.575 23:53:47 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:27:58.575 00:27:58.575 --- 10.0.0.2 ping statistics --- 00:27:58.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.575 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:58.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:27:58.575 00:27:58.575 --- 10.0.0.1 ping statistics --- 00:27:58.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.575 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:58.575 23:53:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.575 23:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:58.575 23:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # bdfs=() 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1518 -- # local bdfs 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:27:58.575 23:53:47 nvmf_identify_passthru -- 
common/autotest_common.sh@1507 -- # bdfs=() 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:5e:00.0 00:27:58.575 23:53:47 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # echo 0000:5e:00.0 00:27:58.575 23:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:58.575 23:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:58.575 23:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:58.575 23:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:58.575 23:53:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:28:02.762 23:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:28:02.762 23:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:02.762 23:53:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:02.762 23:53:51 nvmf_identify_passthru -- 
target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:06.955 23:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:06.955 23:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:06.955 23:53:55 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:06.955 23:53:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:06.955 23:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:06.955 23:53:55 nvmf_identify_passthru -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:06.955 23:53:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:06.955 23:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1173528 00:28:06.955 23:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:06.955 23:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:06.955 23:53:55 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1173528 00:28:06.955 23:53:55 nvmf_identify_passthru -- common/autotest_common.sh@823 -- # '[' -z 1173528 ']' 00:28:06.955 23:53:55 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.955 23:53:55 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # local max_retries=100 00:28:06.955 23:53:55 nvmf_identify_passthru -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:06.955 23:53:55 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # xtrace_disable 00:28:06.955 23:53:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:06.955 [2024-07-15 23:53:55.914557] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:28:06.955 [2024-07-15 23:53:55.914605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.215 [2024-07-15 23:53:55.973197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:07.215 [2024-07-15 23:53:56.055642] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.215 [2024-07-15 23:53:56.055677] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.215 [2024-07-15 23:53:56.055685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.215 [2024-07-15 23:53:56.055690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.215 [2024-07-15 23:53:56.055695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:07.215 [2024-07-15 23:53:56.055737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.215 [2024-07-15 23:53:56.055824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:07.215 [2024-07-15 23:53:56.055911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:07.215 [2024-07-15 23:53:56.055912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.819 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:28:07.819 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # return 0 00:28:07.819 23:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:07.819 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.819 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:07.819 INFO: Log level set to 20 00:28:07.819 INFO: Requests: 00:28:07.819 { 00:28:07.819 "jsonrpc": "2.0", 00:28:07.819 "method": "nvmf_set_config", 00:28:07.819 "id": 1, 00:28:07.819 "params": { 00:28:07.819 "admin_cmd_passthru": { 00:28:07.819 "identify_ctrlr": true 00:28:07.819 } 00:28:07.819 } 00:28:07.819 } 00:28:07.819 00:28:07.819 INFO: response: 00:28:07.819 { 00:28:07.819 "jsonrpc": "2.0", 00:28:07.819 "id": 1, 00:28:07.819 "result": true 00:28:07.819 } 00:28:07.819 00:28:07.819 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:07.819 23:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:07.819 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:07.819 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:07.819 INFO: Setting log level to 20 00:28:07.819 INFO: Setting log level to 20 00:28:07.819 INFO: Log level set to 20 00:28:07.819 INFO: Log level set to 20 00:28:07.819 
INFO: Requests: 00:28:07.819 { 00:28:07.819 "jsonrpc": "2.0", 00:28:07.819 "method": "framework_start_init", 00:28:07.819 "id": 1 00:28:07.819 } 00:28:07.819 00:28:07.819 INFO: Requests: 00:28:07.819 { 00:28:07.819 "jsonrpc": "2.0", 00:28:07.819 "method": "framework_start_init", 00:28:07.819 "id": 1 00:28:07.819 } 00:28:07.819 00:28:08.078 [2024-07-15 23:53:56.822137] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:08.078 INFO: response: 00:28:08.078 { 00:28:08.078 "jsonrpc": "2.0", 00:28:08.078 "id": 1, 00:28:08.078 "result": true 00:28:08.078 } 00:28:08.078 00:28:08.078 INFO: response: 00:28:08.078 { 00:28:08.078 "jsonrpc": "2.0", 00:28:08.078 "id": 1, 00:28:08.078 "result": true 00:28:08.078 } 00:28:08.078 00:28:08.078 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.078 23:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:08.078 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.078 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.078 INFO: Setting log level to 40 00:28:08.078 INFO: Setting log level to 40 00:28:08.078 INFO: Setting log level to 40 00:28:08.078 [2024-07-15 23:53:56.835640] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:08.078 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:08.078 23:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:08.078 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:08.078 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.078 23:53:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:28:08.078 23:53:56 
nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:08.078 23:53:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:11.368 Nvme0n1 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:11.368 [2024-07-15 23:53:59.728162] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.368 23:53:59 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:11.368 [ 00:28:11.368 { 00:28:11.368 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:11.368 "subtype": "Discovery", 00:28:11.368 "listen_addresses": [], 00:28:11.368 "allow_any_host": true, 00:28:11.368 "hosts": [] 00:28:11.368 }, 00:28:11.368 { 00:28:11.368 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.368 "subtype": "NVMe", 00:28:11.368 "listen_addresses": [ 00:28:11.368 { 00:28:11.368 "trtype": "TCP", 00:28:11.368 "adrfam": "IPv4", 00:28:11.368 "traddr": "10.0.0.2", 00:28:11.368 "trsvcid": "4420" 00:28:11.368 } 00:28:11.368 ], 00:28:11.368 "allow_any_host": true, 00:28:11.368 "hosts": [], 00:28:11.368 "serial_number": "SPDK00000000000001", 00:28:11.368 "model_number": "SPDK bdev Controller", 00:28:11.368 "max_namespaces": 1, 00:28:11.368 "min_cntlid": 1, 00:28:11.368 "max_cntlid": 65519, 00:28:11.368 "namespaces": [ 00:28:11.368 { 00:28:11.368 "nsid": 1, 00:28:11.368 "bdev_name": "Nvme0n1", 00:28:11.368 "name": "Nvme0n1", 00:28:11.368 "nguid": "FE9035CD53CA4345B7AA73F0F65E090D", 00:28:11.368 "uuid": "fe9035cd-53ca-4345-b7aa-73f0f65e090d" 00:28:11.368 } 00:28:11.368 ] 00:28:11.368 } 00:28:11.368 ] 00:28:11.368 23:53:59 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:11.368 23:53:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:11.368 23:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:11.368 23:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:28:11.368 23:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:11.368 23:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:11.368 23:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:11.368 23:54:00 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:11.368 23:54:00 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.368 23:54:00 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:11.368 23:54:00 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.368 23:54:00 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:11.368 23:54:00 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.368 23:54:00 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.368 rmmod nvme_tcp 00:28:11.368 rmmod nvme_fabrics 00:28:11.368 rmmod nvme_keyring 00:28:11.368 23:54:00 
nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:11.368 23:54:00 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:11.368 23:54:00 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:11.368 23:54:00 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1173528 ']' 00:28:11.368 23:54:00 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1173528 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@942 -- # '[' -z 1173528 ']' 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # kill -0 1173528 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # uname 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1173528 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1173528' 00:28:11.368 killing process with pid 1173528 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@961 -- # kill 1173528 00:28:11.368 23:54:00 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # wait 1173528 00:28:12.746 23:54:01 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.746 23:54:01 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.746 23:54:01 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.746 23:54:01 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.746 23:54:01 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.746 
23:54:01 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.746 23:54:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:12.746 23:54:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.283 23:54:03 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.283 00:28:15.283 real 0m21.615s 00:28:15.283 user 0m29.740s 00:28:15.283 sys 0m4.743s 00:28:15.283 23:54:03 nvmf_identify_passthru -- common/autotest_common.sh@1118 -- # xtrace_disable 00:28:15.283 23:54:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:15.283 ************************************ 00:28:15.283 END TEST nvmf_identify_passthru 00:28:15.283 ************************************ 00:28:15.283 23:54:03 -- common/autotest_common.sh@1136 -- # return 0 00:28:15.283 23:54:03 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:15.283 23:54:03 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:28:15.283 23:54:03 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:28:15.283 23:54:03 -- common/autotest_common.sh@10 -- # set +x 00:28:15.283 ************************************ 00:28:15.283 START TEST nvmf_dif 00:28:15.283 ************************************ 00:28:15.283 23:54:03 nvmf_dif -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:15.283 * Looking for test storage... 
00:28:15.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.283 23:54:03 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.283 23:54:03 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.283 23:54:03 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.283 23:54:03 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.283 23:54:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.283 23:54:03 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.283 23:54:03 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.283 23:54:03 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:28:15.283 23:54:03 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.283 23:54:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:15.283 23:54:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:15.283 23:54:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:15.283 23:54:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:15.283 23:54:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.283 23:54:03 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:15.283 23:54:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:15.283 23:54:03 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.283 23:54:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:20.551 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 
(0x8086 - 0x159b)' 00:28:20.551 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:20.551 Found net devices under 0000:86:00.0: cvl_0_0 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:20.551 Found net devices under 0000:86:00.1: cvl_0_1 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.551 23:54:08 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:20.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:28:20.551 00:28:20.551 --- 10.0.0.2 ping statistics --- 00:28:20.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.551 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:20.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:28:20.551 00:28:20.551 --- 10.0.0.1 ping statistics --- 00:28:20.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.551 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:20.551 23:54:08 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:22.452 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:22.452 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:22.452 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:22.452 23:54:11 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.452 23:54:11 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.452 23:54:11 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.452 23:54:11 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.452 23:54:11 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.452 23:54:11 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.452 23:54:11 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:22.452 23:54:11 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:22.452 23:54:11 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.452 23:54:11 nvmf_dif -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:22.452 23:54:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:22.452 23:54:11 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:22.452 23:54:11 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1179510 00:28:22.452 23:54:11 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1179510 00:28:22.452 23:54:11 nvmf_dif -- common/autotest_common.sh@823 -- # '[' -z 1179510 ']' 00:28:22.452 23:54:11 nvmf_dif -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.452 23:54:11 nvmf_dif -- common/autotest_common.sh@828 -- # local max_retries=100 00:28:22.452 23:54:11 nvmf_dif -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.452 23:54:11 nvmf_dif -- common/autotest_common.sh@832 -- # xtrace_disable 00:28:22.452 23:54:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:22.452 [2024-07-15 23:54:11.310558] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:28:22.452 [2024-07-15 23:54:11.310600] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.452 [2024-07-15 23:54:11.363320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.710 [2024-07-15 23:54:11.442873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.710 [2024-07-15 23:54:11.442907] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.710 [2024-07-15 23:54:11.442914] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.710 [2024-07-15 23:54:11.442920] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.710 [2024-07-15 23:54:11.442925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:22.710 [2024-07-15 23:54:11.442947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.277 23:54:12 nvmf_dif -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:28:23.277 23:54:12 nvmf_dif -- common/autotest_common.sh@856 -- # return 0 00:28:23.277 23:54:12 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:23.277 23:54:12 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:23.277 23:54:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:23.277 23:54:12 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.277 23:54:12 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:23.277 23:54:12 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:23.277 23:54:12 nvmf_dif -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.277 23:54:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:23.277 [2024-07-15 23:54:12.161389] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.277 23:54:12 nvmf_dif -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.277 23:54:12 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:23.277 23:54:12 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:28:23.277 23:54:12 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:28:23.277 23:54:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:23.277 ************************************ 00:28:23.277 START TEST fio_dif_1_default 00:28:23.277 ************************************ 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1117 -- # fio_dif_1 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:23.277 bdev_null0 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:23.277 [2024-07-15 23:54:12.237701] tcp.c: 
981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:23.277 23:54:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:23.277 { 00:28:23.277 "params": { 00:28:23.277 "name": "Nvme$subsystem", 00:28:23.278 "trtype": "$TEST_TRANSPORT", 00:28:23.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:23.278 "adrfam": "ipv4", 00:28:23.278 "trsvcid": "$NVMF_PORT", 00:28:23.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:23.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:23.278 "hdgst": ${hdgst:-false}, 00:28:23.278 "ddgst": ${ddgst:-false} 00:28:23.278 }, 00:28:23.278 "method": "bdev_nvme_attach_controller" 00:28:23.278 } 00:28:23.278 EOF 00:28:23.278 )") 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 
00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local sanitizers 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # shift 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local asan_lib= 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # grep libasan 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:23.278 23:54:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:23.559 "params": { 00:28:23.559 "name": "Nvme0", 00:28:23.559 "trtype": "tcp", 00:28:23.559 "traddr": "10.0.0.2", 00:28:23.559 "adrfam": "ipv4", 00:28:23.559 "trsvcid": "4420", 00:28:23.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:23.559 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:23.559 "hdgst": false, 00:28:23.559 "ddgst": false 00:28:23.559 }, 00:28:23.559 "method": "bdev_nvme_attach_controller" 00:28:23.559 }' 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:23.559 23:54:12 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:23.821 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:23.821 fio-3.35 
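The JSON fragment printed just above by `gen_nvmf_target_json` is what fio's `--spdk_json_conf` file descriptor receives. A minimal sketch that rebuilds the same single-subsystem fragment in plain shell and checks that it parses — all field values are copied verbatim from the `printf` output above; the `python3 -m json.tool` round-trip is an illustration-only validity check, not part of the test scripts:

```shell
# Rebuild the single-subsystem bdev_nvme_attach_controller config that
# gen_nvmf_target_json emitted above. Values are copied from the log;
# the json.tool step is only an added validity check.
subsystem=0
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config" | python3 -m json.tool >/dev/null && echo "valid JSON"
```

The unquoted heredoc delimiter lets `$subsystem` expand, which is how the test harness stamps out `Nvme0`, `cnode0`, `host0` (and `Nvme1`, `cnode1`, `host1` in the two-subsystem variant further below).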
00:28:23.821 Starting 1 thread 00:28:36.077 00:28:36.077 filename0: (groupid=0, jobs=1): err= 0: pid=1179882: Mon Jul 15 23:54:23 2024 00:28:36.077 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10022msec) 00:28:36.077 slat (nsec): min=4220, max=21861, avg=6304.08, stdev=1248.31 00:28:36.077 clat (usec): min=40801, max=48844, avg=41055.88, stdev=536.49 00:28:36.077 lat (usec): min=40807, max=48857, avg=41062.18, stdev=536.52 00:28:36.077 clat percentiles (usec): 00:28:36.077 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:36.077 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:36.077 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:36.077 | 99.00th=[42206], 99.50th=[42206], 99.90th=[49021], 99.95th=[49021], 00:28:36.077 | 99.99th=[49021] 00:28:36.077 bw ( KiB/s): min= 384, max= 416, per=99.60%, avg=388.80, stdev=11.72, samples=20 00:28:36.077 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:28:36.077 lat (msec) : 50=100.00% 00:28:36.077 cpu : usr=94.17%, sys=5.59%, ctx=14, majf=0, minf=230 00:28:36.077 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:36.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.077 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:36.077 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:36.077 00:28:36.077 Run status group 0 (all jobs): 00:28:36.077 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10022-10022msec 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.077 00:28:36.077 real 0m11.122s 00:28:36.077 user 0m15.900s 00:28:36.077 sys 0m0.833s 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1118 -- # xtrace_disable 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.077 ************************************ 00:28:36.077 END TEST fio_dif_1_default 00:28:36.077 ************************************ 00:28:36.077 23:54:23 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:28:36.077 23:54:23 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:36.077 23:54:23 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:28:36.077 23:54:23 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:28:36.077 23:54:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.077 ************************************ 00:28:36.077 START TEST fio_dif_1_multi_subsystems 00:28:36.077 ************************************ 
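The fio "Run status" summary from the run above (`READ: bw=390KiB/s … io=3904KiB … run=10022-10022msec`) is the line usually scraped when post-processing these logs. A small sketch operating on a copy of that line rather than a live fio run; the `sed` pattern assumes fio 3.x's `bw=<n>KiB/s` formatting:

```shell
# Extract the aggregate READ bandwidth (KiB/s) from a fio run-status line.
# The sample line is copied from the fio_dif_1_default summary above.
line='READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10022-10022msec'
bw_kib=$(printf '%s\n' "$line" | sed -n 's/.*bw=\([0-9]*\)KiB\/s.*/\1/p')
echo "$bw_kib"   # → 390
```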
00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1117 -- # fio_dif_1_multi_subsystems 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.077 bdev_null0 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.077 
23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.077 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.078 [2024-07-15 23:54:23.421928] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.078 bdev_null1 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:36.078 23:54:23 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local sanitizers 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # shift 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local asan_lib= 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.078 { 00:28:36.078 "params": { 00:28:36.078 "name": "Nvme$subsystem", 00:28:36.078 "trtype": "$TEST_TRANSPORT", 00:28:36.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.078 "adrfam": "ipv4", 00:28:36.078 "trsvcid": "$NVMF_PORT", 00:28:36.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.078 "hdgst": ${hdgst:-false}, 00:28:36.078 "ddgst": ${ddgst:-false} 00:28:36.078 }, 00:28:36.078 "method": "bdev_nvme_attach_controller" 00:28:36.078 } 00:28:36.078 EOF 00:28:36.078 )") 00:28:36.078 
23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # grep libasan 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.078 { 00:28:36.078 "params": { 00:28:36.078 "name": "Nvme$subsystem", 00:28:36.078 "trtype": "$TEST_TRANSPORT", 00:28:36.078 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.078 "adrfam": "ipv4", 00:28:36.078 "trsvcid": "$NVMF_PORT", 00:28:36.078 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.078 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.078 "hdgst": ${hdgst:-false}, 00:28:36.078 "ddgst": ${ddgst:-false} 00:28:36.078 }, 00:28:36.078 "method": "bdev_nvme_attach_controller" 00:28:36.078 } 00:28:36.078 EOF 00:28:36.078 )") 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:36.078 "params": { 00:28:36.078 "name": "Nvme0", 00:28:36.078 "trtype": "tcp", 00:28:36.078 "traddr": "10.0.0.2", 00:28:36.078 "adrfam": "ipv4", 00:28:36.078 "trsvcid": "4420", 00:28:36.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:36.078 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:36.078 "hdgst": false, 00:28:36.078 "ddgst": false 00:28:36.078 }, 00:28:36.078 "method": "bdev_nvme_attach_controller" 00:28:36.078 },{ 00:28:36.078 "params": { 00:28:36.078 "name": "Nvme1", 00:28:36.078 "trtype": "tcp", 00:28:36.078 "traddr": "10.0.0.2", 00:28:36.078 "adrfam": "ipv4", 00:28:36.078 "trsvcid": "4420", 00:28:36.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.078 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.078 "hdgst": false, 00:28:36.078 "ddgst": false 00:28:36.078 }, 00:28:36.078 "method": "bdev_nvme_attach_controller" 00:28:36.078 }' 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:36.078 23:54:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.078 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:36.078 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:36.078 fio-3.35 00:28:36.078 Starting 2 threads 00:28:46.083 00:28:46.083 filename0: (groupid=0, jobs=1): err= 0: pid=1181852: Mon Jul 15 23:54:34 2024 00:28:46.083 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:28:46.083 slat (nsec): min=6006, max=24826, avg=7127.06, stdev=1902.51 00:28:46.083 clat (usec): min=762, max=43595, avg=21038.15, stdev=20187.50 00:28:46.083 lat (usec): min=769, max=43620, avg=21045.28, stdev=20186.98 00:28:46.083 clat percentiles (usec): 00:28:46.083 | 1.00th=[ 775], 5.00th=[ 783], 10.00th=[ 791], 20.00th=[ 799], 00:28:46.083 | 30.00th=[ 807], 40.00th=[ 840], 50.00th=[40633], 60.00th=[41157], 00:28:46.083 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:46.083 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:28:46.083 | 99.99th=[43779] 00:28:46.083 bw ( KiB/s): min= 704, max= 768, per=66.00%, avg=758.40, stdev=23.45, samples=20 00:28:46.083 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:28:46.083 lat (usec) : 1000=49.32% 00:28:46.083 lat (msec) : 2=0.58%, 50=50.11% 00:28:46.083 cpu : usr=97.77%, sys=1.98%, ctx=10, majf=0, minf=71 00:28:46.083 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:46.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:46.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.083 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.083 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:46.083 filename1: (groupid=0, jobs=1): err= 0: pid=1181853: Mon Jul 15 23:54:34 2024 00:28:46.083 read: IOPS=97, BW=389KiB/s (398kB/s)(3888KiB/10003msec) 00:28:46.083 slat (nsec): min=6005, max=26626, avg=7785.66, stdev=2583.22 00:28:46.083 clat (usec): min=40789, max=44634, avg=41139.41, stdev=419.53 00:28:46.083 lat (usec): min=40795, max=44661, avg=41147.20, stdev=419.89 00:28:46.083 clat percentiles (usec): 00:28:46.083 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:46.083 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:46.083 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:28:46.083 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44827], 99.95th=[44827], 00:28:46.083 | 99.99th=[44827] 00:28:46.083 bw ( KiB/s): min= 384, max= 416, per=33.70%, avg=387.20, stdev= 9.85, samples=20 00:28:46.083 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:28:46.083 lat (msec) : 50=100.00% 00:28:46.083 cpu : usr=97.79%, sys=1.96%, ctx=14, majf=0, minf=175 00:28:46.083 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:46.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.083 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.083 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:46.083 00:28:46.083 Run status group 0 (all jobs): 00:28:46.083 READ: bw=1148KiB/s (1176kB/s), 389KiB/s-760KiB/s (398kB/s-778kB/s), io=11.2MiB (11.8MB), run=10003-10003msec 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:46.083 23:54:34 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:46.083 00:28:46.083 real 0m11.262s 00:28:46.083 user 0m26.990s 00:28:46.083 sys 0m0.679s 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1118 -- # xtrace_disable 00:28:46.083 23:54:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 ************************************ 00:28:46.083 END TEST fio_dif_1_multi_subsystems 00:28:46.083 ************************************ 00:28:46.083 23:54:34 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:28:46.083 23:54:34 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:46.083 23:54:34 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:28:46.083 23:54:34 nvmf_dif -- common/autotest_common.sh@1099 -- # xtrace_disable 00:28:46.083 23:54:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 ************************************ 00:28:46.083 START TEST fio_dif_rand_params 00:28:46.083 ************************************ 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1117 -- # fio_dif_rand_params 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 
-- # bs=128k 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 bdev_null0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:46.083 [2024-07-15 23:54:34.748543] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1335 -- # shift 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.083 { 00:28:46.083 "params": { 00:28:46.083 "name": "Nvme$subsystem", 00:28:46.083 "trtype": "$TEST_TRANSPORT", 00:28:46.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.083 "adrfam": "ipv4", 00:28:46.083 "trsvcid": "$NVMF_PORT", 00:28:46.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:46.083 "hdgst": ${hdgst:-false}, 00:28:46.083 "ddgst": ${ddgst:-false} 00:28:46.083 }, 00:28:46.083 "method": "bdev_nvme_attach_controller" 00:28:46.083 } 00:28:46.083 EOF 00:28:46.083 )") 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file = 1 )) 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:46.083 "params": { 00:28:46.083 "name": "Nvme0", 00:28:46.083 "trtype": "tcp", 00:28:46.083 "traddr": "10.0.0.2", 00:28:46.083 "adrfam": "ipv4", 00:28:46.083 "trsvcid": "4420", 00:28:46.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:46.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:46.083 "hdgst": false, 00:28:46.083 "ddgst": false 00:28:46.083 }, 00:28:46.083 "method": "bdev_nvme_attach_controller" 00:28:46.083 }' 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:46.083 23:54:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 
-- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:46.343 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:46.343 ... 00:28:46.343 fio-3.35 00:28:46.343 Starting 3 threads 00:28:52.882 00:28:52.882 filename0: (groupid=0, jobs=1): err= 0: pid=1183813: Mon Jul 15 23:54:40 2024 00:28:52.882 read: IOPS=276, BW=34.6MiB/s (36.3MB/s)(174MiB/5030msec) 00:28:52.882 slat (nsec): min=6300, max=70260, avg=11709.05, stdev=5930.81 00:28:52.882 clat (usec): min=3845, max=93747, avg=10823.48, stdev=11808.51 00:28:52.882 lat (usec): min=3853, max=93759, avg=10835.19, stdev=11808.95 00:28:52.882 clat percentiles (usec): 00:28:52.882 | 1.00th=[ 4146], 5.00th=[ 4424], 10.00th=[ 4752], 20.00th=[ 5473], 00:28:52.882 | 30.00th=[ 6325], 40.00th=[ 6980], 50.00th=[ 7504], 60.00th=[ 8160], 00:28:52.882 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[11469], 95.00th=[48497], 00:28:52.882 | 99.00th=[52167], 99.50th=[52691], 99.90th=[89654], 99.95th=[93848], 00:28:52.882 | 99.99th=[93848] 00:28:52.882 bw ( KiB/s): min=24320, max=51200, per=35.61%, avg=35558.40, stdev=7229.15, samples=10 00:28:52.882 iops : min= 190, max= 400, avg=277.80, stdev=56.48, samples=10 00:28:52.882 lat (msec) : 4=0.36%, 10=77.37%, 20=14.44%, 50=4.53%, 100=3.30% 00:28:52.882 cpu : usr=94.49%, sys=4.30%, ctx=271, majf=0, minf=180 00:28:52.882 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:52.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.882 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.882 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:52.882 filename0: (groupid=0, jobs=1): err= 0: pid=1183814: Mon Jul 15 23:54:40 2024 00:28:52.882 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(162MiB/5007msec) 00:28:52.882 slat (nsec): 
min=6181, max=47431, avg=10893.85, stdev=5535.82 00:28:52.882 clat (usec): min=4004, max=92449, avg=11591.86, stdev=12407.44 00:28:52.882 lat (usec): min=4011, max=92461, avg=11602.76, stdev=12408.11 00:28:52.882 clat percentiles (usec): 00:28:52.882 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4883], 20.00th=[ 5800], 00:28:52.882 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7832], 60.00th=[ 8586], 00:28:52.882 | 70.00th=[ 9634], 80.00th=[10683], 90.00th=[12780], 95.00th=[49546], 00:28:52.882 | 99.00th=[52167], 99.50th=[52167], 99.90th=[90702], 99.95th=[92799], 00:28:52.882 | 99.99th=[92799] 00:28:52.882 bw ( KiB/s): min=19456, max=44633, per=33.11%, avg=33058.50, stdev=7357.61, samples=10 00:28:52.882 iops : min= 152, max= 348, avg=258.20, stdev=57.36, samples=10 00:28:52.882 lat (msec) : 10=72.49%, 20=18.62%, 50=5.64%, 100=3.25% 00:28:52.882 cpu : usr=96.88%, sys=2.76%, ctx=10, majf=0, minf=116 00:28:52.882 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:52.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.882 issued rwts: total=1294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.882 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:52.882 filename0: (groupid=0, jobs=1): err= 0: pid=1183815: Mon Jul 15 23:54:40 2024 00:28:52.882 read: IOPS=247, BW=30.9MiB/s (32.4MB/s)(155MiB/5011msec) 00:28:52.882 slat (nsec): min=6291, max=54060, avg=11165.56, stdev=5611.13 00:28:52.882 clat (usec): min=3786, max=91997, avg=12127.67, stdev=13978.30 00:28:52.882 lat (usec): min=3793, max=92010, avg=12138.84, stdev=13978.56 00:28:52.882 clat percentiles (usec): 00:28:52.882 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 5145], 20.00th=[ 6063], 00:28:52.882 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7373], 60.00th=[ 7898], 00:28:52.882 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[47973], 95.00th=[49021], 00:28:52.882 | 
99.00th=[51119], 99.50th=[51643], 99.90th=[90702], 99.95th=[91751], 00:28:52.882 | 99.99th=[91751] 00:28:52.882 bw ( KiB/s): min=18432, max=42496, per=31.66%, avg=31616.00, stdev=7531.85, samples=10 00:28:52.882 iops : min= 144, max= 332, avg=247.00, stdev=58.84, samples=10 00:28:52.882 lat (msec) : 4=0.08%, 10=86.83%, 20=1.45%, 50=8.64%, 100=2.99% 00:28:52.882 cpu : usr=96.27%, sys=3.37%, ctx=7, majf=0, minf=64 00:28:52.882 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:52.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:52.882 issued rwts: total=1238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:52.882 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:52.882 00:28:52.882 Run status group 0 (all jobs): 00:28:52.882 READ: bw=97.5MiB/s (102MB/s), 30.9MiB/s-34.6MiB/s (32.4MB/s-36.3MB/s), io=491MiB (514MB), run=5007-5030msec 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.882 bdev_null0 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.882 [2024-07-15 23:54:40.958036] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.882 bdev_null1 00:28:52.882 23:54:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:52.883 23:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.883 23:54:40 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.883 bdev_null2 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:52.883 23:54:41 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.883 { 00:28:52.883 "params": { 00:28:52.883 "name": "Nvme$subsystem", 00:28:52.883 "trtype": "$TEST_TRANSPORT", 00:28:52.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.883 "adrfam": "ipv4", 00:28:52.883 "trsvcid": "$NVMF_PORT", 00:28:52.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.883 "hdgst": ${hdgst:-false}, 00:28:52.883 "ddgst": ${ddgst:-false} 00:28:52.883 }, 00:28:52.883 "method": "bdev_nvme_attach_controller" 00:28:52.883 } 00:28:52.883 EOF 00:28:52.883 )") 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # shift 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.883 { 00:28:52.883 "params": { 00:28:52.883 "name": "Nvme$subsystem", 00:28:52.883 "trtype": "$TEST_TRANSPORT", 00:28:52.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.883 "adrfam": "ipv4", 00:28:52.883 "trsvcid": "$NVMF_PORT", 00:28:52.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.883 "hdgst": ${hdgst:-false}, 00:28:52.883 "ddgst": ${ddgst:-false} 00:28:52.883 }, 00:28:52.883 "method": "bdev_nvme_attach_controller" 00:28:52.883 } 00:28:52.883 EOF 00:28:52.883 )") 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:52.883 
23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:52.883 { 00:28:52.883 "params": { 00:28:52.883 "name": "Nvme$subsystem", 00:28:52.883 "trtype": "$TEST_TRANSPORT", 00:28:52.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:52.883 "adrfam": "ipv4", 00:28:52.883 "trsvcid": "$NVMF_PORT", 00:28:52.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:52.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:52.883 "hdgst": ${hdgst:-false}, 00:28:52.883 "ddgst": ${ddgst:-false} 00:28:52.883 }, 00:28:52.883 "method": "bdev_nvme_attach_controller" 00:28:52.883 } 00:28:52.883 EOF 00:28:52.883 )") 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
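The `config+=(...)` / `IFS=,` / `printf '%s\n' "${config[*]}"` steps visible above assemble one `bdev_nvme_attach_controller` JSON fragment per subsystem and then comma-join them into the config fio receives. A minimal sketch of that join pattern, with fragment fields mirroring the log (this is an illustrative reconstruction, not the actual `nvmf/common.sh`):

```shell
# Build one attach-controller fragment per subsystem, as gen_nvmf_target_json does.
config=()
for subsystem in 0 1; do
  frag='{ "params": { "name": "Nvme'"$subsystem"'", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode'"$subsystem"'", "hostnqn": "nqn.2016-06.io.spdk:host'"$subsystem"'", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }'
  config+=("$frag")
done
# IFS=, makes "${config[*]}" expand with commas between fragments,
# producing the "},{" boundary seen in the printed config above.
joined=$(IFS=,; printf '%s\n' "${config[*]}")
echo "$joined"
```

In the real script the joined fragments are additionally normalized through `jq .` before being handed to fio.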
00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:52.883 "params": { 00:28:52.883 "name": "Nvme0", 00:28:52.883 "trtype": "tcp", 00:28:52.883 "traddr": "10.0.0.2", 00:28:52.883 "adrfam": "ipv4", 00:28:52.883 "trsvcid": "4420", 00:28:52.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:52.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:52.883 "hdgst": false, 00:28:52.883 "ddgst": false 00:28:52.883 }, 00:28:52.883 "method": "bdev_nvme_attach_controller" 00:28:52.883 },{ 00:28:52.883 "params": { 00:28:52.883 "name": "Nvme1", 00:28:52.883 "trtype": "tcp", 00:28:52.883 "traddr": "10.0.0.2", 00:28:52.883 "adrfam": "ipv4", 00:28:52.883 "trsvcid": "4420", 00:28:52.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:52.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:52.883 "hdgst": false, 00:28:52.883 "ddgst": false 00:28:52.883 }, 00:28:52.883 "method": "bdev_nvme_attach_controller" 00:28:52.883 },{ 00:28:52.883 "params": { 00:28:52.883 "name": "Nvme2", 00:28:52.883 "trtype": "tcp", 00:28:52.883 "traddr": "10.0.0.2", 00:28:52.883 "adrfam": "ipv4", 00:28:52.883 "trsvcid": "4420", 00:28:52.883 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:52.883 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:52.883 "hdgst": false, 00:28:52.883 "ddgst": false 00:28:52.883 }, 00:28:52.883 "method": "bdev_nvme_attach_controller" 00:28:52.883 }' 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:52.883 23:54:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:52.883 23:54:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:52.883 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:52.883 ... 00:28:52.883 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:52.883 ... 00:28:52.883 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:52.883 ... 
00:28:52.884 fio-3.35 00:28:52.884 Starting 24 threads 00:29:05.060 00:29:05.060 filename0: (groupid=0, jobs=1): err= 0: pid=1185041: Mon Jul 15 23:54:52 2024 00:29:05.060 read: IOPS=573, BW=2294KiB/s (2349kB/s)(22.4MiB/10017msec) 00:29:05.060 slat (nsec): min=7663, max=80152, avg=20829.42, stdev=8674.11 00:29:05.060 clat (usec): min=4407, max=32987, avg=27726.13, stdev=1673.80 00:29:05.060 lat (usec): min=4416, max=33004, avg=27746.96, stdev=1674.09 00:29:05.060 clat percentiles (usec): 00:29:05.060 | 1.00th=[25822], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.060 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:05.060 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.060 | 99.00th=[29230], 99.50th=[29492], 99.90th=[32900], 99.95th=[32900], 00:29:05.060 | 99.99th=[32900] 00:29:05.060 bw ( KiB/s): min= 2176, max= 2432, per=4.19%, avg=2290.53, stdev=58.73, samples=19 00:29:05.060 iops : min= 544, max= 608, avg=572.63, stdev=14.68, samples=19 00:29:05.060 lat (msec) : 10=0.28%, 20=0.56%, 50=99.16% 00:29:05.060 cpu : usr=98.58%, sys=1.05%, ctx=9, majf=0, minf=26 00:29:05.060 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.060 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.060 issued rwts: total=5744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.060 filename0: (groupid=0, jobs=1): err= 0: pid=1185042: Mon Jul 15 23:54:52 2024 00:29:05.060 read: IOPS=570, BW=2280KiB/s (2335kB/s)(22.3MiB/10003msec) 00:29:05.060 slat (nsec): min=6817, max=85592, avg=19805.46, stdev=15573.38 00:29:05.060 clat (usec): min=9203, max=62063, avg=27996.96, stdev=1850.54 00:29:05.060 lat (usec): min=9210, max=62077, avg=28016.77, stdev=1850.98 00:29:05.060 clat percentiles (usec): 00:29:05.060 | 
1.00th=[26608], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:29:05.060 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.060 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:29:05.060 | 99.00th=[29754], 99.50th=[34866], 99.90th=[48497], 99.95th=[62129], 00:29:05.060 | 99.99th=[62129] 00:29:05.060 bw ( KiB/s): min= 2100, max= 2304, per=4.15%, avg=2270.53, stdev=50.37, samples=19 00:29:05.060 iops : min= 525, max= 576, avg=567.63, stdev=12.59, samples=19 00:29:05.060 lat (msec) : 10=0.11%, 20=0.37%, 50=99.46%, 100=0.07% 00:29:05.060 cpu : usr=98.97%, sys=0.64%, ctx=9, majf=0, minf=33 00:29:05.060 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=81.1%, 16=18.6%, 32=0.0%, >=64=0.0% 00:29:05.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.060 complete : 0=0.0%, 4=89.5%, 8=10.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.060 issued rwts: total=5702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.060 filename0: (groupid=0, jobs=1): err= 0: pid=1185043: Mon Jul 15 23:54:52 2024 00:29:05.060 read: IOPS=569, BW=2280KiB/s (2335kB/s)(22.3MiB/10018msec) 00:29:05.060 slat (nsec): min=7117, max=74887, avg=14835.32, stdev=8002.16 00:29:05.060 clat (usec): min=17194, max=51295, avg=27959.98, stdev=1172.51 00:29:05.060 lat (usec): min=17208, max=51310, avg=27974.81, stdev=1172.25 00:29:05.060 clat percentiles (usec): 00:29:05.060 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.060 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.060 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:29:05.060 | 99.00th=[29492], 99.50th=[36439], 99.90th=[40633], 99.95th=[40633], 00:29:05.060 | 99.99th=[51119] 00:29:05.060 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=47.72, samples=19 00:29:05.060 iops : min= 544, max= 576, avg=569.26, 
stdev=11.93, samples=19 00:29:05.060 lat (msec) : 20=0.35%, 50=99.61%, 100=0.04% 00:29:05.060 cpu : usr=98.83%, sys=0.77%, ctx=14, majf=0, minf=30 00:29:05.060 IO depths : 1=0.2%, 2=6.4%, 4=24.7%, 8=56.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:05.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.060 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.060 issued rwts: total=5710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.060 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.060 filename0: (groupid=0, jobs=1): err= 0: pid=1185044: Mon Jul 15 23:54:52 2024 00:29:05.060 read: IOPS=573, BW=2295KiB/s (2350kB/s)(22.4MiB/10003msec) 00:29:05.060 slat (nsec): min=4546, max=80466, avg=20946.14, stdev=11592.08 00:29:05.060 clat (usec): min=4013, max=48482, avg=27705.64, stdev=2364.54 00:29:05.060 lat (usec): min=4020, max=48495, avg=27726.59, stdev=2364.86 00:29:05.060 clat percentiles (usec): 00:29:05.060 | 1.00th=[18220], 5.00th=[26870], 10.00th=[27395], 20.00th=[27657], 00:29:05.060 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:05.060 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28705], 00:29:05.060 | 99.00th=[33162], 99.50th=[36439], 99.90th=[48497], 99.95th=[48497], 00:29:05.060 | 99.99th=[48497] 00:29:05.060 bw ( KiB/s): min= 2176, max= 2416, per=4.16%, avg=2277.26, stdev=66.72, samples=19 00:29:05.060 iops : min= 544, max= 604, avg=569.32, stdev=16.68, samples=19 00:29:05.060 lat (msec) : 10=0.17%, 20=1.53%, 50=98.29% 00:29:05.060 cpu : usr=98.93%, sys=0.67%, ctx=25, majf=0, minf=25 00:29:05.060 IO depths : 1=5.3%, 2=10.5%, 4=21.6%, 8=54.8%, 16=7.8%, 32=0.0%, >=64=0.0% 00:29:05.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.060 complete : 0=0.0%, 4=93.3%, 8=1.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.060 issued rwts: total=5738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.060 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:29:05.060 filename0: (groupid=0, jobs=1): err= 0: pid=1185045: Mon Jul 15 23:54:52 2024 00:29:05.060 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10009msec) 00:29:05.060 slat (nsec): min=6227, max=77479, avg=21139.98, stdev=10262.68 00:29:05.060 clat (usec): min=12071, max=55803, avg=27853.47, stdev=1488.83 00:29:05.060 lat (usec): min=12083, max=55820, avg=27874.61, stdev=1488.03 00:29:05.060 clat percentiles (usec): 00:29:05.060 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.060 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.060 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.060 | 99.00th=[28967], 99.50th=[29492], 99.90th=[46400], 99.95th=[55837], 00:29:05.060 | 99.99th=[55837] 00:29:05.060 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:29:05.061 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:05.061 lat (msec) : 20=0.35%, 50=99.58%, 100=0.07% 00:29:05.061 cpu : usr=98.86%, sys=0.75%, ctx=14, majf=0, minf=35 00:29:05.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename0: (groupid=0, jobs=1): err= 0: pid=1185046: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=570, BW=2283KiB/s (2337kB/s)(22.3MiB/10010msec) 00:29:05.061 slat (nsec): min=7192, max=81556, avg=22937.85, stdev=10624.58 00:29:05.061 clat (usec): min=15927, max=40582, avg=27825.47, stdev=732.80 00:29:05.061 lat (usec): min=15936, max=40599, avg=27848.41, stdev=732.74 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[26870], 5.00th=[27395], 
10.00th=[27395], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.061 | 99.00th=[28967], 99.50th=[29754], 99.90th=[33424], 99.95th=[40633], 00:29:05.061 | 99.99th=[40633] 00:29:05.061 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:29:05.061 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:05.061 lat (msec) : 20=0.11%, 50=99.89% 00:29:05.061 cpu : usr=98.75%, sys=0.86%, ctx=7, majf=0, minf=23 00:29:05.061 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename0: (groupid=0, jobs=1): err= 0: pid=1185047: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=565, BW=2263KiB/s (2317kB/s)(22.2MiB/10044msec) 00:29:05.061 slat (nsec): min=4381, max=65021, avg=20427.15, stdev=10578.66 00:29:05.061 clat (usec): min=11171, max=48630, avg=28035.67, stdev=2733.47 00:29:05.061 lat (usec): min=11184, max=48643, avg=28056.09, stdev=2732.60 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[18220], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[29230], 00:29:05.061 | 99.00th=[38536], 99.50th=[43254], 99.90th=[48497], 99.95th=[48497], 00:29:05.061 | 99.99th=[48497] 00:29:05.061 bw ( KiB/s): min= 2160, max= 2432, per=4.15%, avg=2272.00, stdev=64.42, samples=20 00:29:05.061 iops : min= 540, max= 608, avg=568.00, stdev=16.10, samples=20 00:29:05.061 lat (msec) : 20=2.02%, 
50=97.98% 00:29:05.061 cpu : usr=98.33%, sys=0.99%, ctx=221, majf=0, minf=35 00:29:05.061 IO depths : 1=0.2%, 2=4.8%, 4=21.3%, 8=60.7%, 16=13.0%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=93.8%, 8=1.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename0: (groupid=0, jobs=1): err= 0: pid=1185049: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=568, BW=2273KiB/s (2328kB/s)(22.2MiB/10004msec) 00:29:05.061 slat (nsec): min=4808, max=77580, avg=22307.22, stdev=12821.80 00:29:05.061 clat (usec): min=4505, max=55012, avg=28006.56, stdev=2930.23 00:29:05.061 lat (usec): min=4512, max=55029, avg=28028.87, stdev=2929.75 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[19268], 5.00th=[26870], 10.00th=[27395], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[30278], 00:29:05.061 | 99.00th=[38011], 99.50th=[43254], 99.90th=[49021], 99.95th=[49021], 00:29:05.061 | 99.99th=[54789] 00:29:05.061 bw ( KiB/s): min= 2128, max= 2304, per=4.13%, avg=2261.89, stdev=59.86, samples=19 00:29:05.061 iops : min= 532, max= 576, avg=565.47, stdev=14.96, samples=19 00:29:05.061 lat (msec) : 10=0.18%, 20=1.71%, 50=98.08%, 100=0.04% 00:29:05.061 cpu : usr=98.53%, sys=1.07%, ctx=21, majf=0, minf=48 00:29:05.061 IO depths : 1=1.5%, 2=4.6%, 4=14.4%, 8=66.1%, 16=13.6%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=92.1%, 8=4.6%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 
filename1: (groupid=0, jobs=1): err= 0: pid=1185050: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=567, BW=2271KiB/s (2326kB/s)(22.2MiB/10014msec) 00:29:05.061 slat (nsec): min=6748, max=73330, avg=18880.15, stdev=9213.76 00:29:05.061 clat (usec): min=13253, max=69232, avg=28027.38, stdev=2813.67 00:29:05.061 lat (usec): min=13267, max=69251, avg=28046.26, stdev=2813.24 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[18482], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28705], 00:29:05.061 | 99.00th=[39060], 99.50th=[40633], 99.90th=[52167], 99.95th=[52167], 00:29:05.061 | 99.99th=[69731] 00:29:05.061 bw ( KiB/s): min= 2048, max= 2304, per=4.14%, avg=2266.11, stdev=68.76, samples=19 00:29:05.061 iops : min= 512, max= 576, avg=566.53, stdev=17.19, samples=19 00:29:05.061 lat (msec) : 20=2.00%, 50=97.71%, 100=0.28% 00:29:05.061 cpu : usr=98.88%, sys=0.73%, ctx=12, majf=0, minf=29 00:29:05.061 IO depths : 1=5.0%, 2=10.4%, 4=22.8%, 8=54.1%, 16=7.6%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename1: (groupid=0, jobs=1): err= 0: pid=1185051: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=569, BW=2278KiB/s (2333kB/s)(22.2MiB/10002msec) 00:29:05.061 slat (nsec): min=8081, max=81031, avg=21826.18, stdev=10196.58 00:29:05.061 clat (usec): min=25305, max=48374, avg=27907.66, stdev=1131.96 00:29:05.061 lat (usec): min=25330, max=48402, avg=27929.48, stdev=1131.43 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.061 | 
30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.061 | 99.00th=[29230], 99.50th=[29754], 99.90th=[48497], 99.95th=[48497], 00:29:05.061 | 99.99th=[48497] 00:29:05.061 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:29:05.061 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:05.061 lat (msec) : 50=100.00% 00:29:05.061 cpu : usr=98.90%, sys=0.71%, ctx=11, majf=0, minf=41 00:29:05.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename1: (groupid=0, jobs=1): err= 0: pid=1185052: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10009msec) 00:29:05.061 slat (nsec): min=6157, max=81223, avg=21480.85, stdev=11558.39 00:29:05.061 clat (usec): min=10980, max=54590, avg=27825.79, stdev=1810.65 00:29:05.061 lat (usec): min=10993, max=54607, avg=27847.27, stdev=1809.91 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.061 | 99.00th=[29230], 99.50th=[29754], 99.90th=[54789], 99.95th=[54789], 00:29:05.061 | 99.99th=[54789] 00:29:05.061 bw ( KiB/s): min= 2052, max= 2304, per=4.15%, avg=2270.53, stdev=71.25, samples=19 00:29:05.061 iops : min= 513, max= 576, avg=567.63, stdev=17.81, samples=19 00:29:05.061 lat (msec) : 20=0.56%, 50=99.16%, 100=0.28% 00:29:05.061 cpu : usr=98.94%, 
sys=0.67%, ctx=7, majf=0, minf=27 00:29:05.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename1: (groupid=0, jobs=1): err= 0: pid=1185053: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10008msec) 00:29:05.061 slat (nsec): min=4951, max=79333, avg=23337.32, stdev=10639.92 00:29:05.061 clat (usec): min=18134, max=52604, avg=27898.71, stdev=1399.14 00:29:05.061 lat (usec): min=18142, max=52619, avg=27922.05, stdev=1398.24 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.061 | 99.00th=[28967], 99.50th=[29754], 99.90th=[52691], 99.95th=[52691], 00:29:05.061 | 99.99th=[52691] 00:29:05.061 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:29:05.061 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:29:05.061 lat (msec) : 20=0.07%, 50=99.65%, 100=0.28% 00:29:05.061 cpu : usr=98.77%, sys=0.85%, ctx=5, majf=0, minf=27 00:29:05.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename1: (groupid=0, jobs=1): err= 0: pid=1185054: Mon Jul 
15 23:54:52 2024 00:29:05.061 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10003msec) 00:29:05.061 slat (nsec): min=5081, max=76721, avg=20642.45, stdev=11729.68 00:29:05.061 clat (usec): min=11964, max=54440, avg=27817.86, stdev=3208.13 00:29:05.061 lat (usec): min=11979, max=54456, avg=27838.51, stdev=3208.95 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[16909], 5.00th=[22938], 10.00th=[27395], 20.00th=[27657], 00:29:05.061 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28443], 95.00th=[30802], 00:29:05.061 | 99.00th=[39060], 99.50th=[40633], 99.90th=[48497], 99.95th=[48497], 00:29:05.061 | 99.99th=[54264] 00:29:05.061 bw ( KiB/s): min= 2164, max= 2416, per=4.18%, avg=2285.68, stdev=62.73, samples=19 00:29:05.061 iops : min= 541, max= 604, avg=571.42, stdev=15.68, samples=19 00:29:05.061 lat (msec) : 20=3.70%, 50=96.27%, 100=0.03% 00:29:05.061 cpu : usr=98.48%, sys=1.11%, ctx=19, majf=0, minf=27 00:29:05.061 IO depths : 1=0.2%, 2=2.3%, 4=10.3%, 8=71.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=91.3%, 8=6.2%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename1: (groupid=0, jobs=1): err= 0: pid=1185055: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10009msec) 00:29:05.061 slat (nsec): min=6107, max=76584, avg=20818.46, stdev=11990.72 00:29:05.061 clat (usec): min=11646, max=56269, avg=27860.30, stdev=1751.34 00:29:05.061 lat (usec): min=11653, max=56287, avg=27881.12, stdev=1750.77 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[21890], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 
60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:29:05.061 | 99.00th=[29492], 99.50th=[36963], 99.90th=[46400], 99.95th=[56361], 00:29:05.061 | 99.99th=[56361] 00:29:05.061 bw ( KiB/s): min= 2128, max= 2304, per=4.16%, avg=2277.05, stdev=55.95, samples=19 00:29:05.061 iops : min= 532, max= 576, avg=569.26, stdev=13.99, samples=19 00:29:05.061 lat (msec) : 20=0.84%, 50=99.11%, 100=0.05% 00:29:05.061 cpu : usr=98.78%, sys=0.84%, ctx=14, majf=0, minf=24 00:29:05.061 IO depths : 1=5.8%, 2=11.9%, 4=24.7%, 8=50.9%, 16=6.7%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename1: (groupid=0, jobs=1): err= 0: pid=1185056: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10007msec) 00:29:05.061 slat (nsec): min=6433, max=79214, avg=22019.49, stdev=8734.86 00:29:05.061 clat (usec): min=18650, max=52213, avg=27926.45, stdev=1369.35 00:29:05.061 lat (usec): min=18658, max=52236, avg=27948.47, stdev=1368.53 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.061 | 99.00th=[28967], 99.50th=[29754], 99.90th=[52167], 99.95th=[52167], 00:29:05.061 | 99.99th=[52167] 00:29:05.061 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:29:05.061 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:29:05.061 lat (msec) : 20=0.07%, 50=99.65%, 100=0.28% 00:29:05.061 cpu : usr=98.82%, sys=0.80%, ctx=13, majf=0, minf=30 
00:29:05.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename1: (groupid=0, jobs=1): err= 0: pid=1185057: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=569, BW=2276KiB/s (2331kB/s)(22.2MiB/10009msec) 00:29:05.061 slat (nsec): min=7792, max=77963, avg=22472.22, stdev=9282.78 00:29:05.061 clat (usec): min=15795, max=70700, avg=27929.10, stdev=1584.68 00:29:05.061 lat (usec): min=15805, max=70718, avg=27951.57, stdev=1584.15 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.061 | 99.00th=[28967], 99.50th=[29492], 99.90th=[54264], 99.95th=[54264], 00:29:05.061 | 99.99th=[70779] 00:29:05.061 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2270.32, stdev=71.93, samples=19 00:29:05.061 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:29:05.061 lat (msec) : 20=0.11%, 50=99.61%, 100=0.28% 00:29:05.061 cpu : usr=98.85%, sys=0.74%, ctx=12, majf=0, minf=30 00:29:05.061 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename2: (groupid=0, jobs=1): err= 0: pid=1185059: Mon Jul 15 23:54:52 2024 00:29:05.061 
read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10008msec) 00:29:05.061 slat (nsec): min=7823, max=81378, avg=22711.35, stdev=9230.57 00:29:05.061 clat (usec): min=25177, max=52243, avg=27914.73, stdev=1330.79 00:29:05.061 lat (usec): min=25201, max=52256, avg=27937.45, stdev=1329.90 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.061 | 99.00th=[28967], 99.50th=[29492], 99.90th=[52167], 99.95th=[52167], 00:29:05.061 | 99.99th=[52167] 00:29:05.061 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:29:05.061 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:29:05.061 lat (msec) : 50=99.72%, 100=0.28% 00:29:05.061 cpu : usr=98.64%, sys=0.97%, ctx=8, majf=0, minf=25 00:29:05.061 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename2: (groupid=0, jobs=1): err= 0: pid=1185060: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=576, BW=2307KiB/s (2363kB/s)(22.5MiB/10003msec) 00:29:05.061 slat (nsec): min=6799, max=77359, avg=20175.13, stdev=11149.77 00:29:05.061 clat (usec): min=12303, max=52622, avg=27578.20, stdev=3068.25 00:29:05.061 lat (usec): min=12311, max=52648, avg=27598.37, stdev=3068.81 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[16909], 5.00th=[23200], 10.00th=[26870], 20.00th=[27657], 00:29:05.061 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 
80.00th=[27919], 90.00th=[28181], 95.00th=[28967], 00:29:05.061 | 99.00th=[38536], 99.50th=[39584], 99.90th=[52691], 99.95th=[52691], 00:29:05.061 | 99.99th=[52691] 00:29:05.061 bw ( KiB/s): min= 2096, max= 2464, per=4.21%, avg=2301.47, stdev=90.39, samples=19 00:29:05.061 iops : min= 524, max= 616, avg=575.37, stdev=22.60, samples=19 00:29:05.061 lat (msec) : 20=3.62%, 50=96.10%, 100=0.28% 00:29:05.061 cpu : usr=98.90%, sys=0.70%, ctx=16, majf=0, minf=26 00:29:05.061 IO depths : 1=4.5%, 2=9.1%, 4=19.4%, 8=58.2%, 16=8.8%, 32=0.0%, >=64=0.0% 00:29:05.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 complete : 0=0.0%, 4=92.8%, 8=2.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.061 issued rwts: total=5770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.061 filename2: (groupid=0, jobs=1): err= 0: pid=1185061: Mon Jul 15 23:54:52 2024 00:29:05.061 read: IOPS=576, BW=2305KiB/s (2360kB/s)(22.6MiB/10025msec) 00:29:05.061 slat (nsec): min=3294, max=72950, avg=15774.32, stdev=7548.87 00:29:05.061 clat (usec): min=3542, max=35983, avg=27639.78, stdev=2399.65 00:29:05.061 lat (usec): min=3557, max=35993, avg=27655.55, stdev=2399.80 00:29:05.061 clat percentiles (usec): 00:29:05.061 | 1.00th=[ 7504], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.061 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.061 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.061 | 99.00th=[29230], 99.50th=[29754], 99.90th=[29754], 99.95th=[35914], 00:29:05.061 | 99.99th=[35914] 00:29:05.062 bw ( KiB/s): min= 2176, max= 2688, per=4.21%, avg=2304.00, stdev=104.51, samples=19 00:29:05.062 iops : min= 544, max= 672, avg=576.00, stdev=26.13, samples=19 00:29:05.062 lat (msec) : 4=0.28%, 10=0.83%, 20=0.07%, 50=98.82% 00:29:05.062 cpu : usr=98.76%, sys=0.85%, ctx=16, majf=0, minf=31 00:29:05.062 IO depths : 1=6.1%, 2=12.3%, 
4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:29:05.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 issued rwts: total=5776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.062 filename2: (groupid=0, jobs=1): err= 0: pid=1185062: Mon Jul 15 23:54:52 2024 00:29:05.062 read: IOPS=586, BW=2346KiB/s (2402kB/s)(22.9MiB/10009msec) 00:29:05.062 slat (nsec): min=4289, max=68461, avg=13367.07, stdev=6348.72 00:29:05.062 clat (usec): min=3555, max=32933, avg=27169.31, stdev=3129.71 00:29:05.062 lat (usec): min=3572, max=32944, avg=27182.68, stdev=3130.31 00:29:05.062 clat percentiles (usec): 00:29:05.062 | 1.00th=[ 9634], 5.00th=[19792], 10.00th=[27395], 20.00th=[27657], 00:29:05.062 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.062 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.062 | 99.00th=[28967], 99.50th=[29492], 99.90th=[32637], 99.95th=[32900], 00:29:05.062 | 99.99th=[32900] 00:29:05.062 bw ( KiB/s): min= 2176, max= 3056, per=4.28%, avg=2343.58, stdev=201.71, samples=19 00:29:05.062 iops : min= 544, max= 764, avg=585.89, stdev=50.43, samples=19 00:29:05.062 lat (msec) : 4=0.27%, 10=0.82%, 20=4.36%, 50=94.55% 00:29:05.062 cpu : usr=98.87%, sys=0.72%, ctx=17, majf=0, minf=28 00:29:05.062 IO depths : 1=5.8%, 2=11.6%, 4=23.7%, 8=52.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:29:05.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 issued rwts: total=5870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.062 filename2: (groupid=0, jobs=1): err= 0: pid=1185063: Mon Jul 15 23:54:52 2024 00:29:05.062 read: IOPS=569, BW=2278KiB/s 
(2333kB/s)(22.2MiB/10001msec) 00:29:05.062 slat (nsec): min=7194, max=80039, avg=20072.43, stdev=7979.82 00:29:05.062 clat (usec): min=21829, max=48383, avg=27925.17, stdev=1160.62 00:29:05.062 lat (usec): min=21837, max=48413, avg=27945.24, stdev=1160.23 00:29:05.062 clat percentiles (usec): 00:29:05.062 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.062 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.062 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.062 | 99.00th=[29230], 99.50th=[29754], 99.90th=[48497], 99.95th=[48497], 00:29:05.062 | 99.99th=[48497] 00:29:05.062 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:29:05.062 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:05.062 lat (msec) : 50=100.00% 00:29:05.062 cpu : usr=99.00%, sys=0.64%, ctx=8, majf=0, minf=35 00:29:05.062 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.062 filename2: (groupid=0, jobs=1): err= 0: pid=1185064: Mon Jul 15 23:54:52 2024 00:29:05.062 read: IOPS=570, BW=2283KiB/s (2338kB/s)(22.3MiB/10009msec) 00:29:05.062 slat (nsec): min=6230, max=81529, avg=20728.52, stdev=9090.23 00:29:05.062 clat (usec): min=13521, max=40628, avg=27869.37, stdev=1301.26 00:29:05.062 lat (usec): min=13539, max=40641, avg=27890.10, stdev=1300.93 00:29:05.062 clat percentiles (usec): 00:29:05.062 | 1.00th=[25297], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.062 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.062 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 
95.00th=[28443], 00:29:05.062 | 99.00th=[29492], 99.50th=[36439], 99.90th=[40633], 99.95th=[40633], 00:29:05.062 | 99.99th=[40633] 00:29:05.062 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:29:05.062 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:29:05.062 lat (msec) : 20=0.58%, 50=99.42% 00:29:05.062 cpu : usr=98.94%, sys=0.69%, ctx=15, majf=0, minf=33 00:29:05.062 IO depths : 1=6.0%, 2=12.0%, 4=24.5%, 8=51.0%, 16=6.5%, 32=0.0%, >=64=0.0% 00:29:05.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.062 filename2: (groupid=0, jobs=1): err= 0: pid=1185065: Mon Jul 15 23:54:52 2024 00:29:05.062 read: IOPS=579, BW=2316KiB/s (2372kB/s)(22.6MiB/10002msec) 00:29:05.062 slat (nsec): min=4206, max=39807, avg=10569.24, stdev=3820.25 00:29:05.062 clat (usec): min=3448, max=31524, avg=27531.75, stdev=2613.80 00:29:05.062 lat (usec): min=3463, max=31538, avg=27542.31, stdev=2613.69 00:29:05.062 clat percentiles (usec): 00:29:05.062 | 1.00th=[11469], 5.00th=[27395], 10.00th=[27657], 20.00th=[27657], 00:29:05.062 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:29:05.062 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.062 | 99.00th=[28967], 99.50th=[29492], 99.90th=[29754], 99.95th=[29754], 00:29:05.062 | 99.99th=[31589] 00:29:05.062 bw ( KiB/s): min= 2176, max= 2688, per=4.24%, avg=2317.47, stdev=94.40, samples=19 00:29:05.062 iops : min= 544, max= 672, avg=579.37, stdev=23.60, samples=19 00:29:05.062 lat (msec) : 4=0.28%, 10=0.55%, 20=1.97%, 50=97.20% 00:29:05.062 cpu : usr=98.72%, sys=0.89%, ctx=13, majf=0, minf=25 00:29:05.062 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 
00:29:05.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 issued rwts: total=5792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.062 filename2: (groupid=0, jobs=1): err= 0: pid=1185066: Mon Jul 15 23:54:52 2024 00:29:05.062 read: IOPS=571, BW=2284KiB/s (2339kB/s)(22.3MiB/10002msec) 00:29:05.062 slat (nsec): min=6987, max=80915, avg=26924.79, stdev=16803.37 00:29:05.062 clat (usec): min=6757, max=75524, avg=27772.43, stdev=2320.47 00:29:05.062 lat (usec): min=6803, max=75541, avg=27799.36, stdev=2319.46 00:29:05.062 clat percentiles (usec): 00:29:05.062 | 1.00th=[18482], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:29:05.062 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:29:05.062 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:29:05.062 | 99.00th=[30540], 99.50th=[39060], 99.90th=[55313], 99.95th=[55313], 00:29:05.062 | 99.99th=[76022] 00:29:05.062 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.26, stdev=53.20, samples=19 00:29:05.062 iops : min= 544, max= 576, avg=569.32, stdev=13.30, samples=19 00:29:05.062 lat (msec) : 10=0.04%, 20=1.02%, 50=98.67%, 100=0.28% 00:29:05.062 cpu : usr=98.81%, sys=0.82%, ctx=11, majf=0, minf=34 00:29:05.062 IO depths : 1=5.7%, 2=11.7%, 4=24.4%, 8=51.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:29:05.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.062 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.062 00:29:05.062 Run status group 0 (all jobs): 00:29:05.062 READ: bw=53.4MiB/s (56.0MB/s), 2263KiB/s-2346KiB/s (2317kB/s-2402kB/s), io=536MiB (563MB), run=10001-10044msec 
00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # runtime=5 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 bdev_null0 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 [2024-07-15 23:54:52.591098] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 bdev_null1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:05.062 23:54:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local sanitizers 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1335 -- # shift 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local asan_lib= 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.062 { 00:29:05.062 "params": { 00:29:05.062 "name": "Nvme$subsystem", 00:29:05.062 "trtype": "$TEST_TRANSPORT", 00:29:05.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.062 "adrfam": "ipv4", 00:29:05.062 "trsvcid": "$NVMF_PORT", 00:29:05.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.062 "hdgst": ${hdgst:-false}, 00:29:05.062 "ddgst": ${ddgst:-false} 00:29:05.062 }, 00:29:05.062 "method": "bdev_nvme_attach_controller" 00:29:05.062 } 00:29:05.062 EOF 00:29:05.062 )") 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libasan 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file = 1 )) 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.062 { 00:29:05.062 "params": { 00:29:05.062 "name": "Nvme$subsystem", 00:29:05.062 "trtype": "$TEST_TRANSPORT", 00:29:05.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.062 "adrfam": "ipv4", 00:29:05.062 "trsvcid": "$NVMF_PORT", 00:29:05.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.062 "hdgst": ${hdgst:-false}, 00:29:05.062 "ddgst": ${ddgst:-false} 00:29:05.062 }, 00:29:05.062 "method": "bdev_nvme_attach_controller" 00:29:05.062 } 00:29:05.062 EOF 00:29:05.062 )") 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:05.062 "params": { 00:29:05.062 "name": "Nvme0", 00:29:05.062 "trtype": "tcp", 00:29:05.062 "traddr": "10.0.0.2", 00:29:05.062 "adrfam": "ipv4", 00:29:05.062 "trsvcid": "4420", 00:29:05.062 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:05.062 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:05.062 "hdgst": false, 00:29:05.062 "ddgst": false 00:29:05.062 }, 00:29:05.062 "method": "bdev_nvme_attach_controller" 00:29:05.062 },{ 00:29:05.062 "params": { 00:29:05.062 "name": "Nvme1", 00:29:05.062 "trtype": "tcp", 00:29:05.062 "traddr": "10.0.0.2", 00:29:05.062 "adrfam": "ipv4", 00:29:05.062 "trsvcid": "4420", 00:29:05.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.062 "hdgst": false, 00:29:05.062 "ddgst": false 00:29:05.062 }, 00:29:05.062 "method": "bdev_nvme_attach_controller" 00:29:05.062 }' 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # asan_lib= 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:29:05.062 23:54:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:05.062 23:54:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:05.062 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:05.063 ... 00:29:05.063 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:05.063 ... 00:29:05.063 fio-3.35 00:29:05.063 Starting 4 threads 00:29:10.318 00:29:10.318 filename0: (groupid=0, jobs=1): err= 0: pid=1186951: Mon Jul 15 23:54:58 2024 00:29:10.318 read: IOPS=2695, BW=21.1MiB/s (22.1MB/s)(105MiB/5003msec) 00:29:10.318 slat (nsec): min=2943, max=24866, avg=9283.14, stdev=2796.45 00:29:10.318 clat (usec): min=1362, max=7083, avg=2941.76, stdev=490.25 00:29:10.318 lat (usec): min=1368, max=7093, avg=2951.04, stdev=490.24 00:29:10.318 clat percentiles (usec): 00:29:10.318 | 1.00th=[ 2212], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:29:10.318 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2933], 00:29:10.318 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3851], 95.00th=[ 4047], 00:29:10.318 | 99.00th=[ 4424], 99.50th=[ 4555], 99.90th=[ 4686], 99.95th=[ 6980], 00:29:10.318 | 99.99th=[ 7046] 00:29:10.318 bw ( KiB/s): min=20304, max=22480, per=25.73%, avg=21564.44, stdev=871.55, samples=9 00:29:10.318 iops : min= 2538, max= 2810, avg=2695.56, stdev=108.94, samples=9 00:29:10.318 lat (msec) : 2=0.36%, 4=93.69%, 10=5.95% 00:29:10.318 cpu : usr=96.18%, sys=3.48%, ctx=7, majf=0, minf=0 00:29:10.318 IO depths : 1=0.1%, 2=0.7%, 4=71.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.318 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:29:10.318 issued rwts: total=13487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.318 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:10.318 filename0: (groupid=0, jobs=1): err= 0: pid=1186952: Mon Jul 15 23:54:58 2024 00:29:10.318 read: IOPS=2700, BW=21.1MiB/s (22.1MB/s)(106MiB/5002msec) 00:29:10.318 slat (nsec): min=4259, max=25468, avg=9218.76, stdev=2855.33 00:29:10.318 clat (usec): min=1173, max=6877, avg=2935.91, stdev=446.18 00:29:10.318 lat (usec): min=1181, max=6890, avg=2945.13, stdev=446.03 00:29:10.318 clat percentiles (usec): 00:29:10.318 | 1.00th=[ 2245], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:29:10.318 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2966], 00:29:10.318 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3720], 95.00th=[ 3884], 00:29:10.318 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 4752], 99.95th=[ 5538], 00:29:10.318 | 99.99th=[ 6849] 00:29:10.318 bw ( KiB/s): min=20272, max=22432, per=25.77%, avg=21597.78, stdev=825.42, samples=9 00:29:10.318 iops : min= 2534, max= 2804, avg=2699.67, stdev=103.15, samples=9 00:29:10.318 lat (msec) : 2=0.48%, 4=95.91%, 10=3.61% 00:29:10.318 cpu : usr=95.78%, sys=3.86%, ctx=8, majf=0, minf=0 00:29:10.318 IO depths : 1=0.1%, 2=0.4%, 4=72.6%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.318 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.318 issued rwts: total=13509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.318 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:10.318 filename1: (groupid=0, jobs=1): err= 0: pid=1186953: Mon Jul 15 23:54:58 2024 00:29:10.318 read: IOPS=2489, BW=19.4MiB/s (20.4MB/s)(97.2MiB/5001msec) 00:29:10.318 slat (nsec): min=6174, max=26699, avg=9180.69, stdev=2794.10 00:29:10.318 clat (usec): min=1391, max=44136, avg=3190.89, stdev=1124.92 00:29:10.318 lat (usec): min=1398, max=44161, 
avg=3200.08, stdev=1124.85 00:29:10.318 clat percentiles (usec): 00:29:10.318 | 1.00th=[ 2442], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2868], 00:29:10.318 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3163], 00:29:10.318 | 70.00th=[ 3294], 80.00th=[ 3326], 90.00th=[ 3458], 95.00th=[ 4359], 00:29:10.318 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 5014], 99.95th=[44303], 00:29:10.318 | 99.99th=[44303] 00:29:10.318 bw ( KiB/s): min=18384, max=21472, per=23.76%, avg=19909.33, stdev=838.21, samples=9 00:29:10.318 iops : min= 2298, max= 2684, avg=2488.67, stdev=104.78, samples=9 00:29:10.318 lat (msec) : 2=0.16%, 4=91.00%, 10=8.77%, 50=0.06% 00:29:10.318 cpu : usr=96.14%, sys=3.54%, ctx=10, majf=0, minf=0 00:29:10.318 IO depths : 1=0.1%, 2=0.3%, 4=68.8%, 8=30.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.318 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.318 issued rwts: total=12448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.318 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:10.318 filename1: (groupid=0, jobs=1): err= 0: pid=1186954: Mon Jul 15 23:54:58 2024 00:29:10.318 read: IOPS=2592, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec) 00:29:10.318 slat (nsec): min=6185, max=32089, avg=9503.69, stdev=3056.35 00:29:10.318 clat (usec): min=955, max=5149, avg=3061.82, stdev=299.88 00:29:10.318 lat (usec): min=961, max=5172, avg=3071.32, stdev=300.09 00:29:10.318 clat percentiles (usec): 00:29:10.318 | 1.00th=[ 2409], 5.00th=[ 2671], 10.00th=[ 2769], 20.00th=[ 2835], 00:29:10.318 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:29:10.318 | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3326], 95.00th=[ 3326], 00:29:10.318 | 99.00th=[ 4113], 99.50th=[ 4621], 99.90th=[ 4752], 99.95th=[ 4817], 00:29:10.318 | 99.99th=[ 5145] 00:29:10.318 bw ( KiB/s): min=19504, max=21888, per=24.78%, avg=20764.44, stdev=1017.28, 
samples=9 00:29:10.318 iops : min= 2438, max= 2736, avg=2595.56, stdev=127.16, samples=9 00:29:10.318 lat (usec) : 1000=0.02% 00:29:10.318 lat (msec) : 2=0.32%, 4=98.04%, 10=1.62% 00:29:10.319 cpu : usr=96.48%, sys=3.20%, ctx=9, majf=0, minf=9 00:29:10.319 IO depths : 1=0.2%, 2=1.1%, 4=67.2%, 8=31.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.319 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.319 issued rwts: total=12963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.319 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:10.319 00:29:10.319 Run status group 0 (all jobs): 00:29:10.319 READ: bw=81.8MiB/s (85.8MB/s), 19.4MiB/s-21.1MiB/s (20.4MB/s-22.1MB/s), io=409MiB (429MB), run=5001-5003msec 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:10.319 00:29:10.319 real 0m24.085s 00:29:10.319 user 4m52.070s 00:29:10.319 sys 0m4.151s 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1118 -- # xtrace_disable 00:29:10.319 23:54:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.319 ************************************ 00:29:10.319 END TEST fio_dif_rand_params 00:29:10.319 ************************************ 00:29:10.319 23:54:58 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:29:10.319 23:54:58 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:10.319 23:54:58 nvmf_dif -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:29:10.319 23:54:58 nvmf_dif 
-- common/autotest_common.sh@1099 -- # xtrace_disable 00:29:10.319 23:54:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:10.319 ************************************ 00:29:10.319 START TEST fio_dif_digest 00:29:10.319 ************************************ 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1117 -- # fio_dif_digest 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:29:10.319 bdev_null0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:10.319 [2024-07-15 23:54:58.900275] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local sanitizers 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # shift 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local asan_lib= 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.319 { 00:29:10.319 "params": { 00:29:10.319 "name": "Nvme$subsystem", 00:29:10.319 "trtype": "$TEST_TRANSPORT", 00:29:10.319 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:10.319 "adrfam": "ipv4", 00:29:10.319 "trsvcid": "$NVMF_PORT", 00:29:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.319 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.319 "hdgst": ${hdgst:-false}, 00:29:10.319 "ddgst": ${ddgst:-false} 00:29:10.319 }, 00:29:10.319 "method": "bdev_nvme_attach_controller" 00:29:10.319 } 00:29:10.319 EOF 00:29:10.319 )") 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # grep libasan 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:10.319 "params": { 00:29:10.319 "name": "Nvme0", 00:29:10.319 "trtype": "tcp", 00:29:10.319 "traddr": "10.0.0.2", 00:29:10.319 "adrfam": "ipv4", 00:29:10.319 "trsvcid": "4420", 00:29:10.319 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.319 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:10.319 "hdgst": true, 00:29:10.319 "ddgst": true 00:29:10.319 }, 00:29:10.319 "method": "bdev_nvme_attach_controller" 00:29:10.319 }' 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # asan_lib= 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # asan_lib= 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:10.319 23:54:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.319 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:10.319 ... 00:29:10.319 fio-3.35 00:29:10.319 Starting 3 threads 00:29:22.668 00:29:22.668 filename0: (groupid=0, jobs=1): err= 0: pid=1188109: Mon Jul 15 23:55:09 2024 00:29:22.668 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(346MiB/10047msec) 00:29:22.668 slat (nsec): min=6476, max=51543, avg=11898.40, stdev=2154.43 00:29:22.668 clat (usec): min=6629, max=52661, avg=10870.56, stdev=1413.26 00:29:22.668 lat (usec): min=6641, max=52674, avg=10882.45, stdev=1413.29 00:29:22.668 clat percentiles (usec): 00:29:22.668 | 1.00th=[ 7701], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10290], 00:29:22.668 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:29:22.668 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:29:22.668 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13698], 99.95th=[49021], 00:29:22.668 | 99.99th=[52691] 00:29:22.668 bw ( KiB/s): min=34304, max=37888, per=34.15%, avg=35366.40, stdev=802.16, samples=20 00:29:22.668 iops : min= 268, max= 296, avg=276.30, stdev= 6.27, samples=20 00:29:22.668 lat 
(msec) : 10=13.67%, 20=86.26%, 50=0.04%, 100=0.04% 00:29:22.668 cpu : usr=94.54%, sys=5.15%, ctx=13, majf=0, minf=152 00:29:22.668 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:22.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.668 issued rwts: total=2765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.668 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:22.668 filename0: (groupid=0, jobs=1): err= 0: pid=1188110: Mon Jul 15 23:55:09 2024 00:29:22.668 read: IOPS=275, BW=34.5MiB/s (36.2MB/s)(347MiB/10045msec) 00:29:22.668 slat (nsec): min=6455, max=27605, avg=11564.16, stdev=2108.25 00:29:22.668 clat (usec): min=6628, max=53801, avg=10841.42, stdev=1932.19 00:29:22.668 lat (usec): min=6640, max=53829, avg=10852.99, stdev=1932.31 00:29:22.668 clat percentiles (usec): 00:29:22.668 | 1.00th=[ 7898], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:29:22.668 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:29:22.668 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[12387], 00:29:22.668 | 99.00th=[13173], 99.50th=[13566], 99.90th=[52691], 99.95th=[52691], 00:29:22.668 | 99.99th=[53740] 00:29:22.668 bw ( KiB/s): min=31488, max=38144, per=34.24%, avg=35456.00, stdev=1641.30, samples=20 00:29:22.668 iops : min= 246, max= 298, avg=277.00, stdev=12.82, samples=20 00:29:22.668 lat (msec) : 10=19.05%, 20=80.77%, 50=0.07%, 100=0.11% 00:29:22.668 cpu : usr=94.78%, sys=4.90%, ctx=17, majf=0, minf=119 00:29:22.668 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:22.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.668 issued rwts: total=2772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.668 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:29:22.668 filename0: (groupid=0, jobs=1): err= 0: pid=1188111: Mon Jul 15 23:55:09 2024 00:29:22.668 read: IOPS=258, BW=32.3MiB/s (33.8MB/s)(324MiB/10044msec) 00:29:22.668 slat (nsec): min=6500, max=43664, avg=11633.75, stdev=2171.95 00:29:22.668 clat (usec): min=7643, max=93891, avg=11594.59, stdev=3340.97 00:29:22.668 lat (usec): min=7656, max=93904, avg=11606.22, stdev=3341.01 00:29:22.668 clat percentiles (usec): 00:29:22.668 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:29:22.668 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:29:22.668 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12518], 95.00th=[12780], 00:29:22.668 | 99.00th=[13566], 99.50th=[49021], 99.90th=[54264], 99.95th=[54789], 00:29:22.668 | 99.99th=[93848] 00:29:22.668 bw ( KiB/s): min=27392, max=34560, per=32.01%, avg=33155.30, stdev=1672.86, samples=20 00:29:22.668 iops : min= 214, max= 270, avg=259.00, stdev=13.07, samples=20 00:29:22.668 lat (msec) : 10=4.51%, 20=94.98%, 50=0.08%, 100=0.42% 00:29:22.668 cpu : usr=94.51%, sys=5.18%, ctx=21, majf=0, minf=120 00:29:22.668 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:22.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:22.668 issued rwts: total=2592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:22.668 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:22.668 00:29:22.668 Run status group 0 (all jobs): 00:29:22.668 READ: bw=101MiB/s (106MB/s), 32.3MiB/s-34.5MiB/s (33.8MB/s-36.2MB/s), io=1016MiB (1065MB), run=10044-10047msec 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:22.668 23:55:10 
nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:22.668 00:29:22.668 real 0m11.279s 00:29:22.668 user 0m35.251s 00:29:22.668 sys 0m1.873s 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1118 -- # xtrace_disable 00:29:22.668 23:55:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:22.668 ************************************ 00:29:22.668 END TEST fio_dif_digest 00:29:22.669 ************************************ 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@1136 -- # return 0 00:29:22.669 23:55:10 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:22.669 23:55:10 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:22.669 23:55:10 nvmf_dif 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:22.669 rmmod nvme_tcp 00:29:22.669 rmmod nvme_fabrics 00:29:22.669 rmmod nvme_keyring 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1179510 ']' 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1179510 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@942 -- # '[' -z 1179510 ']' 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@946 -- # kill -0 1179510 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@947 -- # uname 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1179510 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1179510' 00:29:22.669 killing process with pid 1179510 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@961 -- # kill 1179510 00:29:22.669 23:55:10 nvmf_dif -- common/autotest_common.sh@966 -- # wait 1179510 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:22.669 23:55:10 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:24.039 Waiting for block devices as requested 00:29:24.039 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:24.039 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:24.039 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:24.039 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:24.039 0000:00:04.4 (8086 2021): 
vfio-pci -> ioatdma 00:29:24.297 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:24.297 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:24.297 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:24.297 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:24.554 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:24.554 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:24.554 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:24.554 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:24.811 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:24.811 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:24.811 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:24.811 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:25.069 23:55:13 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:25.069 23:55:13 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:25.069 23:55:13 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:25.069 23:55:13 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:25.069 23:55:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.069 23:55:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:25.069 23:55:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.970 23:55:15 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:26.970 00:29:26.970 real 1m12.151s 00:29:26.970 user 7m9.573s 00:29:26.970 sys 0m17.607s 00:29:26.970 23:55:15 nvmf_dif -- common/autotest_common.sh@1118 -- # xtrace_disable 00:29:26.970 23:55:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:26.970 ************************************ 00:29:26.970 END TEST nvmf_dif 00:29:26.970 ************************************ 00:29:27.229 23:55:15 -- common/autotest_common.sh@1136 -- # return 0 00:29:27.229 23:55:15 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:27.229 23:55:15 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:29:27.229 23:55:15 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:29:27.229 23:55:15 -- common/autotest_common.sh@10 -- # set +x 00:29:27.229 ************************************ 00:29:27.229 START TEST nvmf_abort_qd_sizes 00:29:27.229 ************************************ 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:27.229 * Looking for test storage... 00:29:27.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z 
tcp ']' 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:27.229 23:55:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # 
local -ga e810 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:32.492 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:32.492 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:32.492 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:32.493 Found net devices under 0000:86:00.0: cvl_0_0 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:32.493 Found net devices under 0000:86:00.1: cvl_0_1 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:32.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:29:32.493 00:29:32.493 --- 10.0.0.2 ping statistics --- 00:29:32.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.493 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:32.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:29:32.493 00:29:32.493 --- 10.0.0.1 ping statistics --- 00:29:32.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.493 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:32.493 23:55:20 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:34.395 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:34.395 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:34.395 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:34.395 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:34.395 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:34.395 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:34.654 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:34.654 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:34.654 0000:80:04.7 (8086 2021): 
ioatdma -> vfio-pci 00:29:34.654 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:34.654 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:34.654 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:34.654 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:34.654 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:34.654 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:34.654 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:35.588 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1195648 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1195648 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@823 -- # '[' -z 1195648 ']' 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # local max_retries=100 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@830 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # xtrace_disable 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:35.588 23:55:24 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:35.588 [2024-07-15 23:55:24.470678] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:29:35.588 [2024-07-15 23:55:24.470721] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.588 [2024-07-15 23:55:24.527981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:35.847 [2024-07-15 23:55:24.611293] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.847 [2024-07-15 23:55:24.611328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.847 [2024-07-15 23:55:24.611335] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.847 [2024-07-15 23:55:24.611341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.847 [2024-07-15 23:55:24.611347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:35.847 [2024-07-15 23:55:24.611391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.847 [2024-07-15 23:55:24.611409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.847 [2024-07-15 23:55:24.611497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:35.847 [2024-07-15 23:55:24.611498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # return 0 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # xtrace_disable 00:29:36.414 23:55:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:36.414 ************************************ 00:29:36.414 START TEST spdk_target_abort 00:29:36.414 ************************************ 00:29:36.414 23:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1117 -- # spdk_target 00:29:36.414 23:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:36.414 23:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:36.414 23:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:36.414 23:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:39.696 spdk_targetn1 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:39.696 [2024-07-15 23:55:28.188160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:39.696 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:39.696 [2024-07-15 23:55:28.221048] tcp.c: 981:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:39.697 23:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:42.977 Initializing NVMe Controllers 00:29:42.977 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:42.977 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:42.977 Initialization complete. Launching workers. 
00:29:42.977 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14252, failed: 0 00:29:42.977 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1494, failed to submit 12758 00:29:42.977 success 815, unsuccess 679, failed 0 00:29:42.977 23:55:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:42.977 23:55:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:46.248 Initializing NVMe Controllers 00:29:46.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:46.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:46.248 Initialization complete. Launching workers. 00:29:46.248 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8808, failed: 0 00:29:46.248 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1248, failed to submit 7560 00:29:46.248 success 315, unsuccess 933, failed 0 00:29:46.248 23:55:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:46.248 23:55:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:49.529 Initializing NVMe Controllers 00:29:49.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:49.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:49.529 Initialization complete. Launching workers. 
00:29:49.529 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37502, failed: 0 00:29:49.529 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2818, failed to submit 34684 00:29:49.529 success 598, unsuccess 2220, failed 0 00:29:49.529 23:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:49.529 23:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:49.529 23:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:49.529 23:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:49.529 23:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:49.529 23:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:29:49.529 23:55:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1195648 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@942 -- # '[' -z 1195648 ']' 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # kill -0 1195648 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # uname 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1195648 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@948 -- # process_name=reactor_0 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1195648' 00:29:50.467 killing process with pid 1195648 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@961 -- # kill 1195648 00:29:50.467 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # wait 1195648 00:29:50.727 00:29:50.727 real 0m14.138s 00:29:50.727 user 0m56.289s 00:29:50.727 sys 0m2.311s 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:50.727 ************************************ 00:29:50.727 END TEST spdk_target_abort 00:29:50.727 ************************************ 00:29:50.727 23:55:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1136 -- # return 0 00:29:50.727 23:55:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:50.727 23:55:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:29:50.727 23:55:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # xtrace_disable 00:29:50.727 23:55:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:50.727 ************************************ 00:29:50.727 START TEST kernel_target_abort 00:29:50.727 ************************************ 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1117 -- # kernel_target 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local 
ip 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:50.727 23:55:39 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:50.727 23:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:52.667 Waiting for block devices as requested 00:29:52.924 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:52.924 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:52.924 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:53.183 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:53.183 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:53.183 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:53.183 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:53.441 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:53.441 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:53.441 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:53.441 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:53.698 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:53.698 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:53.698 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:53.957 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:53.957 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:53.957 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:53.957 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:53.957 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:53.957 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:29:53.957 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:29:53.957 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:53.957 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:29:53.957 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:53.957 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:53.957 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:54.216 No valid GPT data, bailing 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:54.216 23:55:42 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:54.216 23:55:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:54.216 00:29:54.216 Discovery Log Number of Records 2, Generation counter 2 00:29:54.216 =====Discovery Log Entry 0====== 00:29:54.216 trtype: tcp 00:29:54.216 adrfam: ipv4 00:29:54.216 subtype: current discovery subsystem 00:29:54.216 treq: not specified, sq flow control disable supported 00:29:54.216 portid: 1 00:29:54.216 trsvcid: 4420 00:29:54.216 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:54.216 traddr: 10.0.0.1 00:29:54.216 eflags: none 00:29:54.216 sectype: none 00:29:54.216 =====Discovery Log Entry 1====== 00:29:54.216 trtype: tcp 00:29:54.216 adrfam: ipv4 00:29:54.216 subtype: nvme subsystem 00:29:54.216 treq: not specified, sq flow control disable supported 00:29:54.216 portid: 1 00:29:54.216 trsvcid: 4420 00:29:54.216 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:54.216 traddr: 10.0.0.1 00:29:54.216 eflags: none 00:29:54.216 
sectype: none 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:54.216 23:55:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:57.497 Initializing NVMe Controllers 00:29:57.497 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:57.497 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:57.497 Initialization complete. Launching workers. 
00:29:57.497 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73231, failed: 0 00:29:57.497 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 73231, failed to submit 0 00:29:57.497 success 0, unsuccess 73231, failed 0 00:29:57.497 23:55:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:57.497 23:55:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:00.780 Initializing NVMe Controllers 00:30:00.780 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:00.780 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:00.780 Initialization complete. Launching workers. 00:30:00.780 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 124163, failed: 0 00:30:00.780 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31150, failed to submit 93013 00:30:00.780 success 0, unsuccess 31150, failed 0 00:30:00.780 23:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:00.780 23:55:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:04.062 Initializing NVMe Controllers 00:30:04.062 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:04.062 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:04.062 Initialization complete. Launching workers. 
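The xtrace above shows `target/abort_qd_sizes.sh` assembling the transport ID one `field:value` pair at a time (trtype, adrfam, traddr, trsvcid, subnqn) and re-running the abort example at each queue depth. A minimal sketch of that loop, with the values copied from the trace; the abort invocation is echoed rather than executed here, since the built binary only exists in the CI workspace:

```shell
# Rebuild the transport ID string the way the traced rabort function does,
# then iterate the queue depths (4, 24, 64) seen in the log.
trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn
qds=(4 24 64)

target=""
for r in trtype adrfam traddr trsvcid subnqn; do
    target="${target:+$target }$r:${!r}"   # append "field:value" via indirect expansion
done

for qd in "${qds[@]}"; do
    # The real script runs build/examples/abort here; echoed as a sketch.
    echo "abort -q $qd -w rw -M 50 -o 4096 -r '$target'"
done
```

The incremental `target='trtype:tcp'`, `target='trtype:tcp adrfam:IPv4'`, ... lines in the trace are exactly this loop's successive assignments.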
00:30:04.062 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 118210, failed: 0 00:30:04.062 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29558, failed to submit 88652 00:30:04.062 success 0, unsuccess 29558, failed 0 00:30:04.062 23:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:04.062 23:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:04.062 23:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:04.062 23:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:04.062 23:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:04.062 23:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:04.062 23:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:04.062 23:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:04.062 23:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:04.062 23:55:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:05.958 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:05.959 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:05.959 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:05.959 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:05.959 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:05.959 
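The `clean_kernel_target` trace above tears the kernel nvmet target down through configfs: take the namespace offline, unlink the port from the subsystem, remove the configfs directories leaves-first, then unload the modules. Wrapped as a function for illustration; the NQN and paths are the ones in the trace, while the existence guard and the `|| true` on modprobe are additions so the sketch is harmless on a host without an active target:

```shell
clean_kernel_target() {
    local nqn=nqn.2016-06.io.spdk:testnqn
    local subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    local port=/sys/kernel/config/nvmet/ports/1

    [[ -e $subsys ]] || return 0                # nothing to clean up
    echo 0 > "$subsys/namespaces/1/enable"      # take the namespace offline
    rm -f "$port/subsystems/$nqn"               # unlink port -> subsystem symlink
    rmdir "$subsys/namespaces/1"                # remove configfs leaves first...
    rmdir "$port"
    rmdir "$subsys"                             # ...and the subsystem node last
    modprobe -r nvmet_tcp nvmet 2> /dev/null || true
}
```

The ordering matters: configfs refuses to rmdir a subsystem that still has namespaces or a port link, which is why the trace removes the symlink and `namespaces/1` before the parent directories.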
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:05.959 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:05.959 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:06.216 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:06.216 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:06.216 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:06.216 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:06.216 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:06.216 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:06.216 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:06.216 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:07.153 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:07.153 00:30:07.153 real 0m16.359s 00:30:07.153 user 0m7.302s 00:30:07.153 sys 0m4.729s 00:30:07.153 23:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:30:07.153 23:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:07.153 ************************************ 00:30:07.153 END TEST kernel_target_abort 00:30:07.153 ************************************ 00:30:07.153 23:55:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1136 -- # return 0 00:30:07.153 23:55:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:07.153 23:55:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:07.153 23:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:07.153 23:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:07.153 23:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:07.153 23:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:07.153 23:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:07.153 23:55:55 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:07.153 rmmod nvme_tcp 00:30:07.153 rmmod nvme_fabrics 
00:30:07.153 rmmod nvme_keyring 00:30:07.153 23:55:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:07.153 23:55:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:07.153 23:55:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:07.153 23:55:56 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1195648 ']' 00:30:07.153 23:55:56 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1195648 00:30:07.153 23:55:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@942 -- # '[' -z 1195648 ']' 00:30:07.153 23:55:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # kill -0 1195648 00:30:07.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1195648) - No such process 00:30:07.153 23:55:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@969 -- # echo 'Process with pid 1195648 is not found' 00:30:07.153 Process with pid 1195648 is not found 00:30:07.153 23:55:56 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:07.153 23:55:56 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:09.051 Waiting for block devices as requested 00:30:09.310 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:09.310 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:09.310 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:09.568 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:09.568 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:09.568 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:09.568 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:09.826 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:09.826 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:09.826 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:09.826 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:10.084 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:10.084 0000:80:04.4 (8086 2021): 
vfio-pci -> ioatdma 00:30:10.084 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:10.342 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:10.342 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:10.342 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:10.342 23:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:10.342 23:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:10.342 23:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:10.342 23:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:10.342 23:55:59 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.342 23:55:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:10.342 23:55:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.872 23:56:01 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:12.872 00:30:12.872 real 0m45.371s 00:30:12.872 user 1m7.004s 00:30:12.872 sys 0m14.214s 00:30:12.872 23:56:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1118 -- # xtrace_disable 00:30:12.872 23:56:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:12.872 ************************************ 00:30:12.872 END TEST nvmf_abort_qd_sizes 00:30:12.872 ************************************ 00:30:12.872 23:56:01 -- common/autotest_common.sh@1136 -- # return 0 00:30:12.872 23:56:01 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:12.872 23:56:01 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:30:12.872 23:56:01 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:30:12.872 23:56:01 -- common/autotest_common.sh@10 -- # set +x 00:30:12.872 ************************************ 00:30:12.872 START TEST keyring_file 00:30:12.872 
************************************ 00:30:12.872 23:56:01 keyring_file -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:12.872 * Looking for test storage... 00:30:12.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:12.872 23:56:01 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:12.872 23:56:01 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.872 
23:56:01 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.872 23:56:01 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.872 23:56:01 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.872 23:56:01 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.872 23:56:01 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.872 23:56:01 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.872 23:56:01 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.872 23:56:01 
keyring_file -- paths/export.sh@5 -- # export PATH 00:30:12.872 23:56:01 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:12.872 23:56:01 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@15 -- # local 
name key digest path 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SBIOOSUZGH 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SBIOOSUZGH 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SBIOOSUZGH 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.SBIOOSUZGH 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0I7zo42uIn 00:30:12.873 23:56:01 
keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:12.873 23:56:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0I7zo42uIn 00:30:12.873 23:56:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0I7zo42uIn 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0I7zo42uIn 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@30 -- # tgtpid=1204226 00:30:12.873 23:56:01 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1204226 00:30:12.873 23:56:01 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 1204226 ']' 00:30:12.873 23:56:01 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.873 23:56:01 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:30:12.873 23:56:01 keyring_file -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
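`prep_key`, traced above, renders the raw hex string in the NVMe TLS PSK interchange format (`NVMeTLSkey-1:<digest>:<base64 of the PSK bytes plus a trailing CRC-32>:`) via an inline Python snippet, writes it to a `mktemp` file, and restricts the file to mode 0600 before it is registered over the bperf socket with `keyring_file_add_key`. A sketch of that flow; treating the 32-character hex string as literal ASCII bytes matches the trace's `format_key` arguments but is an assumption about the helper's internals:

```shell
key=00112233445566778899aabbccddeeff   # key0 from the traced run
digest=0                               # 0 = no hash in the interchange format
path=$(mktemp)

# Emit "NVMeTLSkey-1:<digest>:<base64(key || CRC32-LE(key))>:" into the file.
python3 - "$key" "$digest" > "$path" << 'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
blob = key + struct.pack("<I", zlib.crc32(key))
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(blob).decode()}:")
EOF

chmod 0600 "$path"   # restrictive permissions, matching the traced prep_key
# The traced run then registers the file:
#   scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"
```

The trailing CRC-32 lets the consumer detect a corrupted or truncated key file before handing the PSK to the TLS stack.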
00:30:12.873 23:56:01 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:30:12.873 23:56:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:12.873 [2024-07-15 23:56:01.692115] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:30:12.873 [2024-07-15 23:56:01.692165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204226 ] 00:30:12.873 [2024-07-15 23:56:01.747705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.873 [2024-07-15 23:56:01.822076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:30:13.889 23:56:02 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:13.889 [2024-07-15 23:56:02.513429] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.889 null0 00:30:13.889 [2024-07-15 23:56:02.545480] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:13.889 [2024-07-15 23:56:02.545704] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:13.889 [2024-07-15 23:56:02.553488] tcp.c:3693:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:13.889 23:56:02 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 
00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@645 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:13.889 23:56:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:13.889 [2024-07-15 23:56:02.569537] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:13.889 request: 00:30:13.889 { 00:30:13.889 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:13.889 "secure_channel": false, 00:30:13.889 "listen_address": { 00:30:13.889 "trtype": "tcp", 00:30:13.889 "traddr": "127.0.0.1", 00:30:13.890 "trsvcid": "4420" 00:30:13.890 }, 00:30:13.890 "method": "nvmf_subsystem_add_listener", 00:30:13.890 "req_id": 1 00:30:13.890 } 00:30:13.890 Got JSON-RPC error response 00:30:13.890 response: 00:30:13.890 { 00:30:13.890 "code": -32602, 00:30:13.890 "message": "Invalid parameters" 00:30:13.890 } 00:30:13.890 23:56:02 keyring_file -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:30:13.890 23:56:02 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:30:13.890 23:56:02 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:30:13.890 23:56:02 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:30:13.890 23:56:02 keyring_file -- 
common/autotest_common.sh@669 -- # (( !es == 0 )) 00:30:13.890 23:56:02 keyring_file -- keyring/file.sh@46 -- # bperfpid=1204450 00:30:13.890 23:56:02 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1204450 /var/tmp/bperf.sock 00:30:13.890 23:56:02 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:13.890 23:56:02 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 1204450 ']' 00:30:13.890 23:56:02 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:13.890 23:56:02 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:30:13.890 23:56:02 keyring_file -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:13.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:13.890 23:56:02 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:30:13.890 23:56:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:13.890 [2024-07-15 23:56:02.623238] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:30:13.890 [2024-07-15 23:56:02.623280] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204450 ] 00:30:13.890 [2024-07-15 23:56:02.677568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.890 [2024-07-15 23:56:02.757269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.454 23:56:03 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:30:14.454 23:56:03 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:30:14.454 23:56:03 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SBIOOSUZGH 00:30:14.454 23:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SBIOOSUZGH 00:30:14.711 23:56:03 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0I7zo42uIn 00:30:14.711 23:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0I7zo42uIn 00:30:14.968 23:56:03 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:14.968 23:56:03 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:14.968 23:56:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.968 23:56:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:14.968 23:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:15.225 23:56:03 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.SBIOOSUZGH == \/\t\m\p\/\t\m\p\.\S\B\I\O\O\S\U\Z\G\H ]] 00:30:15.225 23:56:03 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:30:15.226 23:56:03 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:15.226 23:56:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:15.226 23:56:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:15.226 23:56:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:15.226 23:56:04 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0I7zo42uIn == \/\t\m\p\/\t\m\p\.\0\I\7\z\o\4\2\u\I\n ]] 00:30:15.226 23:56:04 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:15.226 23:56:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:15.226 23:56:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:15.226 23:56:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:15.226 23:56:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:15.226 23:56:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:15.484 23:56:04 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:15.484 23:56:04 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:15.484 23:56:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:15.484 23:56:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:15.484 23:56:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:15.484 23:56:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:15.484 23:56:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:15.744 23:56:04 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:15.744 23:56:04 keyring_file -- keyring/file.sh@57 -- # bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:15.744 23:56:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:15.744 [2024-07-15 23:56:04.626703] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:15.744 nvme0n1 00:30:16.004 23:56:04 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:16.004 23:56:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:16.004 23:56:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.004 23:56:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.004 23:56:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:16.004 23:56:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.004 23:56:04 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:16.004 23:56:04 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:16.004 23:56:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.004 23:56:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:16.004 23:56:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.004 23:56:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.004 23:56:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:16.263 23:56:05 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:16.263 23:56:05 keyring_file -- keyring/file.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:16.263 Running I/O for 1 seconds... 00:30:17.641 00:30:17.641 Latency(us) 00:30:17.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.641 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:17.641 nvme0n1 : 1.01 12021.23 46.96 0.00 0.00 10588.08 6781.55 18008.15 00:30:17.641 =================================================================================================================== 00:30:17.641 Total : 12021.23 46.96 0.00 0.00 10588.08 6781.55 18008.15 00:30:17.641 0 00:30:17.641 23:56:06 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:17.641 23:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:17.641 23:56:06 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:17.641 23:56:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:17.641 23:56:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:17.641 23:56:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:17.641 23:56:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:17.641 23:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:17.641 23:56:06 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:17.641 23:56:06 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:17.641 23:56:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:17.641 23:56:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:17.641 23:56:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:17.641 23:56:06 keyring_file 
-- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:17.641 23:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:17.900 23:56:06 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:17.900 23:56:06 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:17.900 23:56:06 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:30:17.900 23:56:06 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:17.900 23:56:06 keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:30:17.900 23:56:06 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:17.900 23:56:06 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:30:17.900 23:56:06 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:17.901 23:56:06 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:17.901 23:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:18.160 [2024-07-15 23:56:06.892206] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 
00:30:18.160 [2024-07-15 23:56:06.892962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cf770 (107): Transport endpoint is not connected 00:30:18.160 [2024-07-15 23:56:06.893956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cf770 (9): Bad file descriptor 00:30:18.160 [2024-07-15 23:56:06.894957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:18.160 [2024-07-15 23:56:06.894966] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:18.160 [2024-07-15 23:56:06.894973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:18.160 request: 00:30:18.160 { 00:30:18.160 "name": "nvme0", 00:30:18.160 "trtype": "tcp", 00:30:18.160 "traddr": "127.0.0.1", 00:30:18.160 "adrfam": "ipv4", 00:30:18.160 "trsvcid": "4420", 00:30:18.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:18.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:18.160 "prchk_reftag": false, 00:30:18.160 "prchk_guard": false, 00:30:18.160 "hdgst": false, 00:30:18.160 "ddgst": false, 00:30:18.160 "psk": "key1", 00:30:18.160 "method": "bdev_nvme_attach_controller", 00:30:18.160 "req_id": 1 00:30:18.160 } 00:30:18.160 Got JSON-RPC error response 00:30:18.160 response: 00:30:18.160 { 00:30:18.160 "code": -5, 00:30:18.160 "message": "Input/output error" 00:30:18.160 } 00:30:18.160 23:56:06 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:30:18.160 23:56:06 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:30:18.160 23:56:06 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:30:18.160 23:56:06 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:30:18.160 23:56:06 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:18.160 23:56:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:18.160 23:56:06 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:30:18.160 23:56:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.160 23:56:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:18.160 23:56:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.160 23:56:07 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:18.160 23:56:07 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:18.160 23:56:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:18.160 23:56:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:18.160 23:56:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.160 23:56:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:18.160 23:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.420 23:56:07 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:18.420 23:56:07 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:18.420 23:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:18.679 23:56:07 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:18.679 23:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:18.679 23:56:07 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:18.679 23:56:07 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:18.679 23:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:30:18.938 23:56:07 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:18.938 23:56:07 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.SBIOOSUZGH 00:30:18.938 23:56:07 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.SBIOOSUZGH 00:30:18.938 23:56:07 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:30:18.938 23:56:07 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.SBIOOSUZGH 00:30:18.938 23:56:07 keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:30:18.938 23:56:07 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:18.938 23:56:07 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:30:18.938 23:56:07 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:18.938 23:56:07 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SBIOOSUZGH 00:30:18.938 23:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SBIOOSUZGH 00:30:19.197 [2024-07-15 23:56:07.952042] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.SBIOOSUZGH': 0100660 00:30:19.197 [2024-07-15 23:56:07.952065] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:19.197 request: 00:30:19.197 { 00:30:19.197 "name": "key0", 00:30:19.197 "path": "/tmp/tmp.SBIOOSUZGH", 00:30:19.197 "method": "keyring_file_add_key", 00:30:19.197 "req_id": 1 00:30:19.197 } 00:30:19.197 Got JSON-RPC error response 00:30:19.197 response: 00:30:19.197 { 00:30:19.198 "code": -1, 00:30:19.198 "message": "Operation not permitted" 00:30:19.198 } 00:30:19.198 23:56:07 keyring_file -- common/autotest_common.sh@645 -- # es=1 
00:30:19.198 23:56:07 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:30:19.198 23:56:07 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:30:19.198 23:56:07 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:30:19.198 23:56:07 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.SBIOOSUZGH 00:30:19.198 23:56:07 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SBIOOSUZGH 00:30:19.198 23:56:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SBIOOSUZGH 00:30:19.198 23:56:08 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.SBIOOSUZGH 00:30:19.198 23:56:08 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:19.198 23:56:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:19.198 23:56:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:19.198 23:56:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.198 23:56:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:19.198 23:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.457 23:56:08 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:19.457 23:56:08 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:19.457 23:56:08 keyring_file -- common/autotest_common.sh@642 -- # local es=0 00:30:19.457 23:56:08 keyring_file -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:19.457 23:56:08 
keyring_file -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:30:19.457 23:56:08 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:19.457 23:56:08 keyring_file -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:30:19.457 23:56:08 keyring_file -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:19.457 23:56:08 keyring_file -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:19.457 23:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:19.717 [2024-07-15 23:56:08.477433] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.SBIOOSUZGH': No such file or directory 00:30:19.717 [2024-07-15 23:56:08.477451] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:19.717 [2024-07-15 23:56:08.477471] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:19.717 [2024-07-15 23:56:08.477476] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:19.717 [2024-07-15 23:56:08.477482] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:19.717 request: 00:30:19.717 { 00:30:19.717 "name": "nvme0", 00:30:19.717 "trtype": "tcp", 00:30:19.717 "traddr": "127.0.0.1", 00:30:19.717 "adrfam": "ipv4", 00:30:19.717 "trsvcid": "4420", 00:30:19.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:19.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:19.717 "prchk_reftag": false, 00:30:19.717 "prchk_guard": false, 00:30:19.717 "hdgst": false, 
00:30:19.717 "ddgst": false, 00:30:19.717 "psk": "key0", 00:30:19.717 "method": "bdev_nvme_attach_controller", 00:30:19.717 "req_id": 1 00:30:19.717 } 00:30:19.717 Got JSON-RPC error response 00:30:19.717 response: 00:30:19.717 { 00:30:19.717 "code": -19, 00:30:19.717 "message": "No such device" 00:30:19.717 } 00:30:19.717 23:56:08 keyring_file -- common/autotest_common.sh@645 -- # es=1 00:30:19.717 23:56:08 keyring_file -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:30:19.717 23:56:08 keyring_file -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:30:19.717 23:56:08 keyring_file -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:30:19.717 23:56:08 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:19.717 23:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:19.717 23:56:08 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:19.717 23:56:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:19.717 23:56:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:19.717 23:56:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:19.717 23:56:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:19.717 23:56:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:19.717 23:56:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2OVKC9HFwr 00:30:19.717 23:56:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:19.717 23:56:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:19.717 23:56:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:19.717 23:56:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:19.717 23:56:08 keyring_file -- 
nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:19.717 23:56:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:19.717 23:56:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:19.976 23:56:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2OVKC9HFwr 00:30:19.976 23:56:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2OVKC9HFwr 00:30:19.976 23:56:08 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.2OVKC9HFwr 00:30:19.976 23:56:08 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2OVKC9HFwr 00:30:19.976 23:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2OVKC9HFwr 00:30:19.976 23:56:08 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:19.976 23:56:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:20.238 nvme0n1 00:30:20.238 23:56:09 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:20.238 23:56:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:20.238 23:56:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:20.238 23:56:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:20.238 23:56:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:20.238 23:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:20.498 23:56:09 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 
00:30:20.498 23:56:09 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:20.498 23:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:20.757 23:56:09 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:20.757 23:56:09 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:20.757 23:56:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:20.757 23:56:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:20.757 23:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:20.757 23:56:09 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:20.757 23:56:09 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:20.757 23:56:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:20.757 23:56:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:20.757 23:56:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:20.757 23:56:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:20.757 23:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.016 23:56:09 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:21.016 23:56:09 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:21.016 23:56:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:21.275 23:56:10 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:21.275 23:56:10 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.275 23:56:10 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:21.275 23:56:10 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:21.275 23:56:10 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2OVKC9HFwr 00:30:21.275 23:56:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2OVKC9HFwr 00:30:21.535 23:56:10 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0I7zo42uIn 00:30:21.535 23:56:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0I7zo42uIn 00:30:21.795 23:56:10 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:21.795 23:56:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:21.795 nvme0n1 00:30:21.795 23:56:10 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:21.795 23:56:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:22.055 23:56:10 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:22.055 "subsystems": [ 00:30:22.055 { 00:30:22.055 "subsystem": "keyring", 00:30:22.055 "config": [ 00:30:22.055 { 00:30:22.055 "method": "keyring_file_add_key", 00:30:22.055 "params": { 00:30:22.055 "name": "key0", 00:30:22.055 "path": 
"/tmp/tmp.2OVKC9HFwr" 00:30:22.055 } 00:30:22.055 }, 00:30:22.055 { 00:30:22.055 "method": "keyring_file_add_key", 00:30:22.055 "params": { 00:30:22.055 "name": "key1", 00:30:22.055 "path": "/tmp/tmp.0I7zo42uIn" 00:30:22.055 } 00:30:22.055 } 00:30:22.055 ] 00:30:22.055 }, 00:30:22.055 { 00:30:22.055 "subsystem": "iobuf", 00:30:22.055 "config": [ 00:30:22.055 { 00:30:22.055 "method": "iobuf_set_options", 00:30:22.055 "params": { 00:30:22.055 "small_pool_count": 8192, 00:30:22.055 "large_pool_count": 1024, 00:30:22.055 "small_bufsize": 8192, 00:30:22.055 "large_bufsize": 135168 00:30:22.055 } 00:30:22.055 } 00:30:22.055 ] 00:30:22.055 }, 00:30:22.055 { 00:30:22.055 "subsystem": "sock", 00:30:22.055 "config": [ 00:30:22.055 { 00:30:22.055 "method": "sock_set_default_impl", 00:30:22.055 "params": { 00:30:22.055 "impl_name": "posix" 00:30:22.055 } 00:30:22.055 }, 00:30:22.055 { 00:30:22.055 "method": "sock_impl_set_options", 00:30:22.055 "params": { 00:30:22.055 "impl_name": "ssl", 00:30:22.055 "recv_buf_size": 4096, 00:30:22.055 "send_buf_size": 4096, 00:30:22.055 "enable_recv_pipe": true, 00:30:22.055 "enable_quickack": false, 00:30:22.055 "enable_placement_id": 0, 00:30:22.055 "enable_zerocopy_send_server": true, 00:30:22.055 "enable_zerocopy_send_client": false, 00:30:22.055 "zerocopy_threshold": 0, 00:30:22.055 "tls_version": 0, 00:30:22.055 "enable_ktls": false 00:30:22.055 } 00:30:22.055 }, 00:30:22.055 { 00:30:22.055 "method": "sock_impl_set_options", 00:30:22.055 "params": { 00:30:22.055 "impl_name": "posix", 00:30:22.055 "recv_buf_size": 2097152, 00:30:22.055 "send_buf_size": 2097152, 00:30:22.055 "enable_recv_pipe": true, 00:30:22.055 "enable_quickack": false, 00:30:22.055 "enable_placement_id": 0, 00:30:22.055 "enable_zerocopy_send_server": true, 00:30:22.055 "enable_zerocopy_send_client": false, 00:30:22.055 "zerocopy_threshold": 0, 00:30:22.055 "tls_version": 0, 00:30:22.055 "enable_ktls": false 00:30:22.055 } 00:30:22.055 } 00:30:22.055 ] 00:30:22.055 }, 
00:30:22.055 { 00:30:22.055 "subsystem": "vmd", 00:30:22.055 "config": [] 00:30:22.055 }, 00:30:22.055 { 00:30:22.055 "subsystem": "accel", 00:30:22.055 "config": [ 00:30:22.055 { 00:30:22.055 "method": "accel_set_options", 00:30:22.055 "params": { 00:30:22.055 "small_cache_size": 128, 00:30:22.055 "large_cache_size": 16, 00:30:22.055 "task_count": 2048, 00:30:22.055 "sequence_count": 2048, 00:30:22.055 "buf_count": 2048 00:30:22.055 } 00:30:22.055 } 00:30:22.055 ] 00:30:22.055 }, 00:30:22.055 { 00:30:22.055 "subsystem": "bdev", 00:30:22.055 "config": [ 00:30:22.055 { 00:30:22.055 "method": "bdev_set_options", 00:30:22.055 "params": { 00:30:22.055 "bdev_io_pool_size": 65535, 00:30:22.055 "bdev_io_cache_size": 256, 00:30:22.055 "bdev_auto_examine": true, 00:30:22.055 "iobuf_small_cache_size": 128, 00:30:22.055 "iobuf_large_cache_size": 16 00:30:22.055 } 00:30:22.055 }, 00:30:22.055 { 00:30:22.055 "method": "bdev_raid_set_options", 00:30:22.055 "params": { 00:30:22.055 "process_window_size_kb": 1024 00:30:22.055 } 00:30:22.055 }, 00:30:22.055 { 00:30:22.055 "method": "bdev_iscsi_set_options", 00:30:22.055 "params": { 00:30:22.055 "timeout_sec": 30 00:30:22.055 } 00:30:22.055 }, 00:30:22.055 { 00:30:22.055 "method": "bdev_nvme_set_options", 00:30:22.055 "params": { 00:30:22.055 "action_on_timeout": "none", 00:30:22.055 "timeout_us": 0, 00:30:22.055 "timeout_admin_us": 0, 00:30:22.055 "keep_alive_timeout_ms": 10000, 00:30:22.055 "arbitration_burst": 0, 00:30:22.055 "low_priority_weight": 0, 00:30:22.055 "medium_priority_weight": 0, 00:30:22.055 "high_priority_weight": 0, 00:30:22.055 "nvme_adminq_poll_period_us": 10000, 00:30:22.055 "nvme_ioq_poll_period_us": 0, 00:30:22.055 "io_queue_requests": 512, 00:30:22.055 "delay_cmd_submit": true, 00:30:22.055 "transport_retry_count": 4, 00:30:22.055 "bdev_retry_count": 3, 00:30:22.055 "transport_ack_timeout": 0, 00:30:22.055 "ctrlr_loss_timeout_sec": 0, 00:30:22.055 "reconnect_delay_sec": 0, 00:30:22.055 
"fast_io_fail_timeout_sec": 0, 00:30:22.055 "disable_auto_failback": false, 00:30:22.055 "generate_uuids": false, 00:30:22.055 "transport_tos": 0, 00:30:22.055 "nvme_error_stat": false, 00:30:22.055 "rdma_srq_size": 0, 00:30:22.055 "io_path_stat": false, 00:30:22.055 "allow_accel_sequence": false, 00:30:22.055 "rdma_max_cq_size": 0, 00:30:22.055 "rdma_cm_event_timeout_ms": 0, 00:30:22.055 "dhchap_digests": [ 00:30:22.055 "sha256", 00:30:22.056 "sha384", 00:30:22.056 "sha512" 00:30:22.056 ], 00:30:22.056 "dhchap_dhgroups": [ 00:30:22.056 "null", 00:30:22.056 "ffdhe2048", 00:30:22.056 "ffdhe3072", 00:30:22.056 "ffdhe4096", 00:30:22.056 "ffdhe6144", 00:30:22.056 "ffdhe8192" 00:30:22.056 ] 00:30:22.056 } 00:30:22.056 }, 00:30:22.056 { 00:30:22.056 "method": "bdev_nvme_attach_controller", 00:30:22.056 "params": { 00:30:22.056 "name": "nvme0", 00:30:22.056 "trtype": "TCP", 00:30:22.056 "adrfam": "IPv4", 00:30:22.056 "traddr": "127.0.0.1", 00:30:22.056 "trsvcid": "4420", 00:30:22.056 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:22.056 "prchk_reftag": false, 00:30:22.056 "prchk_guard": false, 00:30:22.056 "ctrlr_loss_timeout_sec": 0, 00:30:22.056 "reconnect_delay_sec": 0, 00:30:22.056 "fast_io_fail_timeout_sec": 0, 00:30:22.056 "psk": "key0", 00:30:22.056 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:22.056 "hdgst": false, 00:30:22.056 "ddgst": false 00:30:22.056 } 00:30:22.056 }, 00:30:22.056 { 00:30:22.056 "method": "bdev_nvme_set_hotplug", 00:30:22.056 "params": { 00:30:22.056 "period_us": 100000, 00:30:22.056 "enable": false 00:30:22.056 } 00:30:22.056 }, 00:30:22.056 { 00:30:22.056 "method": "bdev_wait_for_examine" 00:30:22.056 } 00:30:22.056 ] 00:30:22.056 }, 00:30:22.056 { 00:30:22.056 "subsystem": "nbd", 00:30:22.056 "config": [] 00:30:22.056 } 00:30:22.056 ] 00:30:22.056 }' 00:30:22.056 23:56:10 keyring_file -- keyring/file.sh@114 -- # killprocess 1204450 00:30:22.056 23:56:10 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 1204450 ']' 00:30:22.056 
23:56:10 keyring_file -- common/autotest_common.sh@946 -- # kill -0 1204450 00:30:22.056 23:56:10 keyring_file -- common/autotest_common.sh@947 -- # uname 00:30:22.056 23:56:10 keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:30:22.056 23:56:10 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1204450 00:30:22.316 23:56:11 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:30:22.316 23:56:11 keyring_file -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:30:22.316 23:56:11 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1204450' 00:30:22.316 killing process with pid 1204450 00:30:22.316 23:56:11 keyring_file -- common/autotest_common.sh@961 -- # kill 1204450 00:30:22.316 Received shutdown signal, test time was about 1.000000 seconds 00:30:22.316 00:30:22.316 Latency(us) 00:30:22.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.316 =================================================================================================================== 00:30:22.316 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:22.316 23:56:11 keyring_file -- common/autotest_common.sh@966 -- # wait 1204450 00:30:22.316 23:56:11 keyring_file -- keyring/file.sh@117 -- # bperfpid=1205962 00:30:22.316 23:56:11 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1205962 /var/tmp/bperf.sock 00:30:22.316 23:56:11 keyring_file -- common/autotest_common.sh@823 -- # '[' -z 1205962 ']' 00:30:22.316 23:56:11 keyring_file -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:22.316 23:56:11 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:22.316 23:56:11 keyring_file -- common/autotest_common.sh@828 -- # local max_retries=100 00:30:22.316 23:56:11 keyring_file -- 
common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:22.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:22.316 23:56:11 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:22.316 "subsystems": [ 00:30:22.316 { 00:30:22.316 "subsystem": "keyring", 00:30:22.316 "config": [ 00:30:22.316 { 00:30:22.316 "method": "keyring_file_add_key", 00:30:22.316 "params": { 00:30:22.316 "name": "key0", 00:30:22.316 "path": "/tmp/tmp.2OVKC9HFwr" 00:30:22.316 } 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "method": "keyring_file_add_key", 00:30:22.316 "params": { 00:30:22.316 "name": "key1", 00:30:22.316 "path": "/tmp/tmp.0I7zo42uIn" 00:30:22.316 } 00:30:22.316 } 00:30:22.316 ] 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "subsystem": "iobuf", 00:30:22.316 "config": [ 00:30:22.316 { 00:30:22.316 "method": "iobuf_set_options", 00:30:22.316 "params": { 00:30:22.316 "small_pool_count": 8192, 00:30:22.316 "large_pool_count": 1024, 00:30:22.316 "small_bufsize": 8192, 00:30:22.316 "large_bufsize": 135168 00:30:22.316 } 00:30:22.316 } 00:30:22.316 ] 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "subsystem": "sock", 00:30:22.316 "config": [ 00:30:22.316 { 00:30:22.316 "method": "sock_set_default_impl", 00:30:22.316 "params": { 00:30:22.316 "impl_name": "posix" 00:30:22.316 } 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "method": "sock_impl_set_options", 00:30:22.316 "params": { 00:30:22.316 "impl_name": "ssl", 00:30:22.316 "recv_buf_size": 4096, 00:30:22.316 "send_buf_size": 4096, 00:30:22.316 "enable_recv_pipe": true, 00:30:22.316 "enable_quickack": false, 00:30:22.316 "enable_placement_id": 0, 00:30:22.316 "enable_zerocopy_send_server": true, 00:30:22.316 "enable_zerocopy_send_client": false, 00:30:22.316 "zerocopy_threshold": 0, 00:30:22.316 "tls_version": 0, 00:30:22.316 "enable_ktls": false 00:30:22.316 } 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "method": 
"sock_impl_set_options", 00:30:22.316 "params": { 00:30:22.316 "impl_name": "posix", 00:30:22.316 "recv_buf_size": 2097152, 00:30:22.316 "send_buf_size": 2097152, 00:30:22.316 "enable_recv_pipe": true, 00:30:22.316 "enable_quickack": false, 00:30:22.316 "enable_placement_id": 0, 00:30:22.316 "enable_zerocopy_send_server": true, 00:30:22.316 "enable_zerocopy_send_client": false, 00:30:22.316 "zerocopy_threshold": 0, 00:30:22.316 "tls_version": 0, 00:30:22.316 "enable_ktls": false 00:30:22.316 } 00:30:22.316 } 00:30:22.316 ] 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "subsystem": "vmd", 00:30:22.316 "config": [] 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "subsystem": "accel", 00:30:22.316 "config": [ 00:30:22.316 { 00:30:22.316 "method": "accel_set_options", 00:30:22.316 "params": { 00:30:22.316 "small_cache_size": 128, 00:30:22.316 "large_cache_size": 16, 00:30:22.316 "task_count": 2048, 00:30:22.316 "sequence_count": 2048, 00:30:22.316 "buf_count": 2048 00:30:22.316 } 00:30:22.316 } 00:30:22.316 ] 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "subsystem": "bdev", 00:30:22.316 "config": [ 00:30:22.316 { 00:30:22.316 "method": "bdev_set_options", 00:30:22.316 "params": { 00:30:22.316 "bdev_io_pool_size": 65535, 00:30:22.316 "bdev_io_cache_size": 256, 00:30:22.316 "bdev_auto_examine": true, 00:30:22.316 "iobuf_small_cache_size": 128, 00:30:22.316 "iobuf_large_cache_size": 16 00:30:22.316 } 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "method": "bdev_raid_set_options", 00:30:22.316 "params": { 00:30:22.316 "process_window_size_kb": 1024 00:30:22.316 } 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "method": "bdev_iscsi_set_options", 00:30:22.316 "params": { 00:30:22.316 "timeout_sec": 30 00:30:22.316 } 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "method": "bdev_nvme_set_options", 00:30:22.316 "params": { 00:30:22.316 "action_on_timeout": "none", 00:30:22.316 "timeout_us": 0, 00:30:22.316 "timeout_admin_us": 0, 00:30:22.316 "keep_alive_timeout_ms": 10000, 00:30:22.316 
"arbitration_burst": 0, 00:30:22.316 "low_priority_weight": 0, 00:30:22.316 "medium_priority_weight": 0, 00:30:22.316 "high_priority_weight": 0, 00:30:22.316 "nvme_adminq_poll_period_us": 10000, 00:30:22.316 "nvme_ioq_poll_period_us": 0, 00:30:22.316 "io_queue_requests": 512, 00:30:22.316 "delay_cmd_submit": true, 00:30:22.316 "transport_retry_count": 4, 00:30:22.316 "bdev_retry_count": 3, 00:30:22.316 "transport_ack_timeout": 0, 00:30:22.316 "ctrlr_loss_timeout_sec": 0, 00:30:22.316 "reconnect_delay_sec": 0, 00:30:22.316 "fast_io_fail_timeout_sec": 0, 00:30:22.316 "disable_auto_failback": false, 00:30:22.316 "generate_uuids": false, 00:30:22.316 "transport_tos": 0, 00:30:22.316 "nvme_error_stat": false, 00:30:22.316 "rdma_srq_size": 0, 00:30:22.316 "io_path_stat": false, 00:30:22.316 "allow_accel_sequence": false, 00:30:22.316 "rdma_max_cq_size": 0, 00:30:22.316 "rdma_cm_event_timeout_ms": 0, 00:30:22.316 "dhchap_digests": [ 00:30:22.316 "sha256", 00:30:22.316 "sha384", 00:30:22.316 "sha512" 00:30:22.316 ], 00:30:22.316 "dhchap_dhgroups": [ 00:30:22.316 "null", 00:30:22.316 "ffdhe2048", 00:30:22.316 "ffdhe3072", 00:30:22.316 "ffdhe4096", 00:30:22.316 "ffdhe6144", 00:30:22.316 "ffdhe8192" 00:30:22.316 ] 00:30:22.316 } 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "method": "bdev_nvme_attach_controller", 00:30:22.316 "params": { 00:30:22.316 "name": "nvme0", 00:30:22.316 "trtype": "TCP", 00:30:22.316 "adrfam": "IPv4", 00:30:22.316 "traddr": "127.0.0.1", 00:30:22.316 "trsvcid": "4420", 00:30:22.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:22.316 "prchk_reftag": false, 00:30:22.316 "prchk_guard": false, 00:30:22.316 "ctrlr_loss_timeout_sec": 0, 00:30:22.316 "reconnect_delay_sec": 0, 00:30:22.316 "fast_io_fail_timeout_sec": 0, 00:30:22.316 "psk": "key0", 00:30:22.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:22.316 "hdgst": false, 00:30:22.316 "ddgst": false 00:30:22.316 } 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "method": "bdev_nvme_set_hotplug", 
00:30:22.316 "params": { 00:30:22.316 "period_us": 100000, 00:30:22.316 "enable": false 00:30:22.316 } 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "method": "bdev_wait_for_examine" 00:30:22.316 } 00:30:22.316 ] 00:30:22.316 }, 00:30:22.316 { 00:30:22.316 "subsystem": "nbd", 00:30:22.316 "config": [] 00:30:22.316 } 00:30:22.317 ] 00:30:22.317 }' 00:30:22.317 23:56:11 keyring_file -- common/autotest_common.sh@832 -- # xtrace_disable 00:30:22.317 23:56:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:22.317 [2024-07-15 23:56:11.265039] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:30:22.317 [2024-07-15 23:56:11.265093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205962 ] 00:30:22.595 [2024-07-15 23:56:11.319503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.595 [2024-07-15 23:56:11.387827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.595 [2024-07-15 23:56:11.546133] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:23.160 23:56:12 keyring_file -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:30:23.160 23:56:12 keyring_file -- common/autotest_common.sh@856 -- # return 0 00:30:23.160 23:56:12 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:23.160 23:56:12 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:23.160 23:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:23.418 23:56:12 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:23.418 23:56:12 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:23.418 23:56:12 keyring_file -- keyring/common.sh@12 
-- # get_key key0 00:30:23.418 23:56:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:23.418 23:56:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:23.418 23:56:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:23.418 23:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:23.676 23:56:12 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:23.676 23:56:12 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:23.676 23:56:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:23.676 23:56:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:23.676 23:56:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:23.676 23:56:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:23.676 23:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:23.676 23:56:12 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:23.676 23:56:12 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:23.676 23:56:12 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:23.676 23:56:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:23.934 23:56:12 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:23.935 23:56:12 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:23.935 23:56:12 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2OVKC9HFwr /tmp/tmp.0I7zo42uIn 00:30:23.935 23:56:12 keyring_file -- keyring/file.sh@20 -- # killprocess 1205962 00:30:23.935 23:56:12 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 1205962 ']' 
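The get_refcnt helper traced above fetches keyring_get_keys over the bperf socket and filters the reply with jq ('.[] | select(.name == "key0")', then '.refcnt'). A stand-alone sketch of the same lookup; the sample JSON below is hypothetical (shaped like a keyring_get_keys reply, reusing the two key paths from this run), and python3 stands in for jq so the snippet has no extra dependencies:

```shell
#!/usr/bin/env bash
# Hypothetical keyring_get_keys reply; only the fields the filter uses.
keys='[{"name":"key0","path":"/tmp/tmp.2OVKC9HFwr","refcnt":2},
       {"name":"key1","path":"/tmp/tmp.0I7zo42uIn","refcnt":1}]'

get_refcnt() {
    # Equivalent of: jq -r '.[] | select(.name == "<name>") | .refcnt'
    python3 -c '
import json, sys
keys = json.load(sys.stdin)
print(next(k["refcnt"] for k in keys if k["name"] == sys.argv[1]))
' "$1" <<< "$keys"
}

get_refcnt key0   # prints 2
get_refcnt key1   # prints 1
```

The refcnt is what the `(( 2 == 2 ))` and `(( 1 == 1 ))` checks above compare: key0 is held by both the keyring and the attached nvme0 controller, key1 only by the keyring.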
00:30:23.935 23:56:12 keyring_file -- common/autotest_common.sh@946 -- # kill -0 1205962 00:30:23.935 23:56:12 keyring_file -- common/autotest_common.sh@947 -- # uname 00:30:23.935 23:56:12 keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:30:23.935 23:56:12 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1205962 00:30:23.935 23:56:12 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:30:23.935 23:56:12 keyring_file -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:30:23.935 23:56:12 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1205962' 00:30:23.935 killing process with pid 1205962 00:30:23.935 23:56:12 keyring_file -- common/autotest_common.sh@961 -- # kill 1205962 00:30:23.935 Received shutdown signal, test time was about 1.000000 seconds 00:30:23.935 00:30:23.935 Latency(us) 00:30:23.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.935 =================================================================================================================== 00:30:23.935 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:23.935 23:56:12 keyring_file -- common/autotest_common.sh@966 -- # wait 1205962 00:30:24.192 23:56:13 keyring_file -- keyring/file.sh@21 -- # killprocess 1204226 00:30:24.192 23:56:13 keyring_file -- common/autotest_common.sh@942 -- # '[' -z 1204226 ']' 00:30:24.192 23:56:13 keyring_file -- common/autotest_common.sh@946 -- # kill -0 1204226 00:30:24.192 23:56:13 keyring_file -- common/autotest_common.sh@947 -- # uname 00:30:24.192 23:56:13 keyring_file -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:30:24.192 23:56:13 keyring_file -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1204226 00:30:24.193 23:56:13 keyring_file -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:30:24.193 23:56:13 keyring_file -- 
common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:30:24.193 23:56:13 keyring_file -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1204226' 00:30:24.193 killing process with pid 1204226 00:30:24.193 23:56:13 keyring_file -- common/autotest_common.sh@961 -- # kill 1204226 00:30:24.193 [2024-07-15 23:56:13.066466] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:24.193 23:56:13 keyring_file -- common/autotest_common.sh@966 -- # wait 1204226 00:30:24.516 00:30:24.516 real 0m11.941s 00:30:24.516 user 0m28.078s 00:30:24.516 sys 0m2.716s 00:30:24.516 23:56:13 keyring_file -- common/autotest_common.sh@1118 -- # xtrace_disable 00:30:24.516 23:56:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:24.516 ************************************ 00:30:24.516 END TEST keyring_file 00:30:24.516 ************************************ 00:30:24.516 23:56:13 -- common/autotest_common.sh@1136 -- # return 0 00:30:24.516 23:56:13 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:24.516 23:56:13 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:24.516 23:56:13 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:30:24.516 23:56:13 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:30:24.516 23:56:13 -- common/autotest_common.sh@10 -- # set +x 00:30:24.516 ************************************ 00:30:24.516 START TEST keyring_linux 00:30:24.516 ************************************ 00:30:24.516 23:56:13 keyring_linux -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:24.774 * Looking for test storage... 
00:30:24.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:24.774 23:56:13 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:24.774 23:56:13 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.774 23:56:13 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.774 23:56:13 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.774 23:56:13 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.774 23:56:13 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.774 23:56:13 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.774 23:56:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.774 23:56:13 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.774 23:56:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:24.774 23:56:13 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:24.774 23:56:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:24.774 23:56:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:24.774 23:56:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:24.774 23:56:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:24.774 23:56:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:24.774 23:56:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:24.774 23:56:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:24.774 23:56:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:24.774 23:56:13 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:30:24.774 23:56:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:24.774 23:56:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:24.774 23:56:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:24.774 23:56:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:24.774 23:56:13 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:24.775 23:56:13 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:24.775 23:56:13 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:24.775 23:56:13 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:24.775 23:56:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:24.775 23:56:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:24.775 /tmp/:spdk-test:key0 00:30:24.775 23:56:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:24.775 23:56:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:24.775 23:56:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:24.775 23:56:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:24.775 23:56:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:24.775 23:56:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:24.775 23:56:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:24.775 23:56:13 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:30:24.775 23:56:13 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:24.775 23:56:13 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:24.775 23:56:13 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:24.775 23:56:13 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:24.775 23:56:13 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:24.775 23:56:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:24.775 23:56:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:24.775 /tmp/:spdk-test:key1 00:30:24.775 23:56:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1206511 00:30:24.775 23:56:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1206511 00:30:24.775 23:56:13 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:24.775 23:56:13 keyring_linux -- common/autotest_common.sh@823 -- # '[' -z 1206511 ']' 00:30:24.775 23:56:13 keyring_linux -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.775 23:56:13 keyring_linux -- common/autotest_common.sh@828 -- # local max_retries=100 00:30:24.775 23:56:13 keyring_linux -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.775 23:56:13 keyring_linux -- common/autotest_common.sh@832 -- # xtrace_disable 00:30:24.775 23:56:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:24.775 [2024-07-15 23:56:13.699108] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
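The prep_key / format_interchange_psk steps traced above wrap each raw key in an NVMe TLS PSK interchange string via an inline python snippet before writing it to the key file. A minimal stand-alone sketch of that wrapping, assuming the format is "NVMeTLSkey-1:<hash>:<base64(key bytes + CRC32)>:" with a little-endian CRC; the CRC details are an assumption here, not something shown in the log:

```shell
#!/usr/bin/env bash
# Sketch of the format_interchange_psk step traced above. Assumption: the
# trailing 4 bytes inside the base64 body are a little-endian CRC32 of the
# key bytes; treat that as an assumption, not a statement about SPDK's code.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 -c '
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key))
print("NVMeTLSkey-1:%s:%s:" % (sys.argv[2], base64.b64encode(key + crc).decode()))
' "$key" "$digest"
}

format_interchange_psk 00112233445566778899aabbccddeeff 00
```

Whatever the checksum convention, the first 32 decoded bytes of the base64 body are the key characters themselves, which is an easy sanity check against the /tmp/:spdk-test:key0 payload seen a few lines later.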
00:30:24.775 [2024-07-15 23:56:13.699159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206511 ] 00:30:25.032 [2024-07-15 23:56:13.751770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.032 [2024-07-15 23:56:13.831389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@856 -- # return 0 00:30:25.596 23:56:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@553 -- # xtrace_disable 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:25.596 [2024-07-15 23:56:14.499043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.596 null0 00:30:25.596 [2024-07-15 23:56:14.531096] tcp.c: 942:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:25.596 [2024-07-15 23:56:14.531419] tcp.c: 981:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:30:25.596 23:56:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:25.596 159384431 00:30:25.596 23:56:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:25.596 391782311 00:30:25.596 23:56:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1206523 00:30:25.596 23:56:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1206523 /var/tmp/bperf.sock 00:30:25.596 23:56:14 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@823 -- # '[' -z 1206523 ']' 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@828 -- # local max_retries=100 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:25.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@832 -- # xtrace_disable 00:30:25.596 23:56:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:25.853 [2024-07-15 23:56:14.600711] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
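The two NVMeTLSkey-1 payloads added to the session keyring above (serials 159384431 and 391782311) carry the configured keys in base64. Decoding the body is a quick sanity check: the first 32 bytes are the configured key, the trailing 4 bytes a checksum.

```shell
#!/usr/bin/env bash
# The base64 body of the :spdk-test:key0 payload loaded above. It decodes to
# 36 bytes: the 32 configured key characters followed by a 4-byte checksum.
payload='MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ'
configured_key=$(printf '%s' "$payload" | base64 -d | head -c 32)
echo "$configured_key"   # 00112233445566778899aabbccddeeff
```

This is the same key0 value the test configured at keyring/linux.sh@13, so the interchange framing round-trips the key unchanged.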
00:30:25.853 [2024-07-15 23:56:14.600754] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206523 ] 00:30:25.853 [2024-07-15 23:56:14.655580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.853 [2024-07-15 23:56:14.735222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.788 23:56:15 keyring_linux -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:30:26.788 23:56:15 keyring_linux -- common/autotest_common.sh@856 -- # return 0 00:30:26.788 23:56:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:26.788 23:56:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:26.788 23:56:15 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:26.788 23:56:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:27.047 23:56:15 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:27.047 23:56:15 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:27.047 [2024-07-15 23:56:15.991962] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:27.304 nvme0n1 00:30:27.304 23:56:16 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:30:27.304 23:56:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:27.304 23:56:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:27.304 23:56:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:27.304 23:56:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:27.304 23:56:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:27.304 23:56:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:27.304 23:56:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:27.304 23:56:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:27.304 23:56:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:27.305 23:56:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:27.305 23:56:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:27.305 23:56:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:27.563 23:56:16 keyring_linux -- keyring/linux.sh@25 -- # sn=159384431 00:30:27.563 23:56:16 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:27.563 23:56:16 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:27.563 23:56:16 keyring_linux -- keyring/linux.sh@26 -- # [[ 159384431 == \1\5\9\3\8\4\4\3\1 ]] 00:30:27.563 23:56:16 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 159384431 00:30:27.563 23:56:16 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:27.563 23:56:16 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:27.563 Running I/O for 1 seconds...
00:30:28.938
00:30:28.938 Latency(us)
00:30:28.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:28.938 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:30:28.938 nvme0n1 : 1.01 12275.09 47.95 0.00 0.00 10383.96 7693.36 18805.98
00:30:28.938 ===================================================================================================================
00:30:28.938 Total : 12275.09 47.95 0.00 0.00 10383.96 7693.36 18805.98
00:30:28.938 0
00:30:28.938 23:56:17 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:30:28.938 23:56:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:30:28.939 23:56:17 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:30:28.939 23:56:17 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:30:28.939 23:56:17 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:30:28.939 23:56:17 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:30:28.939 23:56:17 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:30:28.939 23:56:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:30:29.197 23:56:17 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:30:29.197 23:56:17 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:30:29.197 23:56:17 keyring_linux -- keyring/linux.sh@23 -- # return
00:30:29.197 23:56:17 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:29.197 23:56:17 keyring_linux -- common/autotest_common.sh@642 -- # local es=0 00:30:29.197 23:56:17 keyring_linux -- common/autotest_common.sh@644 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:29.197 23:56:17 keyring_linux -- common/autotest_common.sh@630 -- # local arg=bperf_cmd 00:30:29.197 23:56:17 keyring_linux -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:29.197 23:56:17 keyring_linux -- common/autotest_common.sh@634 -- # type -t bperf_cmd 00:30:29.197 23:56:17 keyring_linux -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:30:29.197 23:56:17 keyring_linux -- common/autotest_common.sh@645 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:29.197 23:56:17 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:29.197 [2024-07-15 23:56:18.082997] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:29.197 [2024-07-15 23:56:18.083538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b47fd0 (107): Transport endpoint is not connected 00:30:29.197 [2024-07-15 23:56:18.084532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b47fd0 (9): Bad file descriptor 00:30:29.197 [2024-07-15 23:56:18.085533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:29.197 [2024-07-15 23:56:18.085544] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:29.197 [2024-07-15 23:56:18.085550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:29.197 request: 00:30:29.197 { 00:30:29.197 "name": "nvme0", 00:30:29.197 "trtype": "tcp", 00:30:29.197 "traddr": "127.0.0.1", 00:30:29.197 "adrfam": "ipv4", 00:30:29.197 "trsvcid": "4420", 00:30:29.197 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:29.197 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:29.197 "prchk_reftag": false, 00:30:29.197 "prchk_guard": false, 00:30:29.197 "hdgst": false, 00:30:29.197 "ddgst": false, 00:30:29.197 "psk": ":spdk-test:key1", 00:30:29.197 "method": "bdev_nvme_attach_controller", 00:30:29.197 "req_id": 1 00:30:29.197 } 00:30:29.197 Got JSON-RPC error response 00:30:29.197 response: 00:30:29.197 { 00:30:29.197 "code": -5, 00:30:29.197 "message": "Input/output error" 00:30:29.197 } 00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@645 -- # es=1 00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@33 -- # sn=159384431 00:30:29.197 23:56:18 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 159384431
00:30:29.197 1 links removed
00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@33 -- # sn=391782311
00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 391782311
00:30:29.197 1 links removed
00:30:29.197 23:56:18 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1206523
00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@942 -- # '[' -z 1206523 ']'
00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@946 -- # kill -0 1206523
00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@947 -- # uname
00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1206523
00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@948 -- # process_name=reactor_1
00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']'
00:30:29.197 23:56:18 keyring_linux -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1206523'
killing process with pid 1206523
23:56:18 keyring_linux -- common/autotest_common.sh@961 -- # kill 1206523
Received shutdown signal, test time was about 1.000000 seconds
00:30:29.198
00:30:29.198 Latency(us)
00:30:29.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:29.198 ===================================================================================================================
00:30:29.198 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:29.198 23:56:18 keyring_linux -- common/autotest_common.sh@966 -- # wait 1206523
00:30:29.456 23:56:18 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1206511
00:30:29.456 23:56:18 keyring_linux -- common/autotest_common.sh@942 -- # '[' -z 1206511 ']'
00:30:29.456 23:56:18 keyring_linux -- common/autotest_common.sh@946 -- # kill -0 1206511
00:30:29.456 23:56:18 keyring_linux -- common/autotest_common.sh@947 -- # uname
00:30:29.456 23:56:18 keyring_linux -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']'
00:30:29.456 23:56:18 keyring_linux -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1206511
00:30:29.456 23:56:18 keyring_linux -- common/autotest_common.sh@948 -- # process_name=reactor_0
00:30:29.456 23:56:18 keyring_linux -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']'
00:30:29.456 23:56:18 keyring_linux -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1206511'
killing process with pid 1206511
23:56:18 keyring_linux -- common/autotest_common.sh@961 -- # kill 1206511
00:30:30.024 23:56:18 keyring_linux -- common/autotest_common.sh@966 -- # wait 1206511
00:30:30.024
00:30:30.024 real 0m5.239s
00:30:30.024 user 0m9.280s
00:30:30.024 sys 0m1.371s
00:30:30.024 23:56:18 keyring_linux -- common/autotest_common.sh@1118 -- # xtrace_disable
00:30:30.024 23:56:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:30:30.024 ************************************
00:30:30.024 END TEST keyring_linux
00:30:30.024 ************************************
00:30:30.024 23:56:18 -- common/autotest_common.sh@1136 -- # return 0
00:30:30.024 23:56:18 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']'
00:30:30.024 23:56:18 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:30:30.024 23:56:18 -- spdk/autotest.sh@316 -- # '[' 0
-eq 1 ']' 00:30:30.024 23:56:18 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:30.024 23:56:18 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:30.024 23:56:18 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:30.024 23:56:18 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:30.024 23:56:18 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:30.024 23:56:18 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:30.024 23:56:18 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:30.024 23:56:18 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:30.024 23:56:18 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:30.024 23:56:18 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:30.024 23:56:18 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:30.024 23:56:18 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:30.024 23:56:18 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:30.024 23:56:18 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:30.024 23:56:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:30.024 23:56:18 -- common/autotest_common.sh@10 -- # set +x 00:30:30.024 23:56:18 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:30.024 23:56:18 -- common/autotest_common.sh@1386 -- # local autotest_es=0 00:30:30.024 23:56:18 -- common/autotest_common.sh@1387 -- # xtrace_disable 00:30:30.024 23:56:18 -- common/autotest_common.sh@10 -- # set +x 00:30:34.292 INFO: APP EXITING 00:30:34.292 INFO: killing all VMs 00:30:34.292 INFO: killing vhost app 00:30:34.292 INFO: EXIT DONE 00:30:36.830 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:30:36.830 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:00:04.2 (8086 2021): 
Already using the ioatdma driver 00:30:36.830 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:30:36.830 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:30:39.369 Cleaning 00:30:39.369 Removing: /var/run/dpdk/spdk0/config 00:30:39.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:39.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:39.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:39.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:39.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:39.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:39.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:39.369 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:39.369 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:39.369 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:39.369 Removing: /var/run/dpdk/spdk1/config 00:30:39.369 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:39.369 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:39.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:39.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:39.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:39.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:39.370 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:39.370 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:39.370 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:39.370 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:39.370 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:39.370 Removing: /var/run/dpdk/spdk2/config 00:30:39.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:39.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:39.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:39.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:39.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:39.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:39.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:39.370 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:39.370 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:39.370 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:39.370 Removing: /var/run/dpdk/spdk3/config 00:30:39.370 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:39.370 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:39.370 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:39.370 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:39.370 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:39.370 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:39.370 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:39.370 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:39.370 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:39.370 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:39.370 Removing: /var/run/dpdk/spdk4/config 00:30:39.370 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:39.370 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:39.370 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:39.370 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:39.370 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:39.370 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:39.370 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:39.370 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:39.370 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:39.370 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:39.370 Removing: /dev/shm/bdev_svc_trace.1 00:30:39.370 Removing: /dev/shm/nvmf_trace.0 00:30:39.370 Removing: /dev/shm/spdk_tgt_trace.pid823245 00:30:39.370 Removing: /var/run/dpdk/spdk0 00:30:39.370 Removing: /var/run/dpdk/spdk1 00:30:39.370 Removing: /var/run/dpdk/spdk2 00:30:39.370 Removing: /var/run/dpdk/spdk3 00:30:39.370 Removing: /var/run/dpdk/spdk4 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1002431 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1027280 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1031768 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1033372 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1035342 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1035579 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1035811 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1036047 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1036913 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1038848 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1039814 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1040275 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1042560 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1043107 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1043832 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1047882 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1057818 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1061853 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1067834 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1069253 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1070676 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1074962 00:30:39.370 Removing: 
/var/run/dpdk/spdk_pid1078988 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1086854 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1086857 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1091559 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1091794 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1091977 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1092259 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1092282 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1096736 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1097307 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1101518 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1104174 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1109558 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1114908 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1123452 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1130550 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1130598 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1148974 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1149502 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1150156 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1150854 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1151814 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1152344 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1152997 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1153690 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1157944 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1158177 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1164018 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1164293 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1166512 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1174233 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1174240 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1179780 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1181678 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1183544 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1184755 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1186723 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1187788 
00:30:39.370 Removing: /var/run/dpdk/spdk_pid1196290 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1196749 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1197414 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1199548 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1200114 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1200628 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1204226 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1204450 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1205962 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1206511 00:30:39.370 Removing: /var/run/dpdk/spdk_pid1206523 00:30:39.370 Removing: /var/run/dpdk/spdk_pid821117 00:30:39.370 Removing: /var/run/dpdk/spdk_pid822179 00:30:39.370 Removing: /var/run/dpdk/spdk_pid823245 00:30:39.370 Removing: /var/run/dpdk/spdk_pid823898 00:30:39.370 Removing: /var/run/dpdk/spdk_pid824841 00:30:39.370 Removing: /var/run/dpdk/spdk_pid825083 00:30:39.370 Removing: /var/run/dpdk/spdk_pid826054 00:30:39.370 Removing: /var/run/dpdk/spdk_pid826289 00:30:39.370 Removing: /var/run/dpdk/spdk_pid826415 00:30:39.370 Removing: /var/run/dpdk/spdk_pid828043 00:30:39.370 Removing: /var/run/dpdk/spdk_pid829175 00:30:39.370 Removing: /var/run/dpdk/spdk_pid829453 00:30:39.370 Removing: /var/run/dpdk/spdk_pid829747 00:30:39.370 Removing: /var/run/dpdk/spdk_pid830056 00:30:39.370 Removing: /var/run/dpdk/spdk_pid830405 00:30:39.370 Removing: /var/run/dpdk/spdk_pid830603 00:30:39.370 Removing: /var/run/dpdk/spdk_pid830836 00:30:39.370 Removing: /var/run/dpdk/spdk_pid831114 00:30:39.370 Removing: /var/run/dpdk/spdk_pid831976 00:30:39.370 Removing: /var/run/dpdk/spdk_pid834835 00:30:39.370 Removing: /var/run/dpdk/spdk_pid835101 00:30:39.370 Removing: /var/run/dpdk/spdk_pid835362 00:30:39.370 Removing: /var/run/dpdk/spdk_pid835592 00:30:39.370 Removing: /var/run/dpdk/spdk_pid836085 00:30:39.370 Removing: /var/run/dpdk/spdk_pid836099 00:30:39.370 Removing: /var/run/dpdk/spdk_pid836587 00:30:39.370 Removing: /var/run/dpdk/spdk_pid836708 00:30:39.370 
Removing: /var/run/dpdk/spdk_pid837072 00:30:39.370 Removing: /var/run/dpdk/spdk_pid837097 00:30:39.370 Removing: /var/run/dpdk/spdk_pid837353 00:30:39.370 Removing: /var/run/dpdk/spdk_pid837583 00:30:39.370 Removing: /var/run/dpdk/spdk_pid837925 00:30:39.370 Removing: /var/run/dpdk/spdk_pid838176 00:30:39.370 Removing: /var/run/dpdk/spdk_pid838461 00:30:39.370 Removing: /var/run/dpdk/spdk_pid838733 00:30:39.370 Removing: /var/run/dpdk/spdk_pid838897 00:30:39.370 Removing: /var/run/dpdk/spdk_pid839038 00:30:39.370 Removing: /var/run/dpdk/spdk_pid839285 00:30:39.370 Removing: /var/run/dpdk/spdk_pid839533 00:30:39.370 Removing: /var/run/dpdk/spdk_pid839784 00:30:39.370 Removing: /var/run/dpdk/spdk_pid840031 00:30:39.370 Removing: /var/run/dpdk/spdk_pid840278 00:30:39.370 Removing: /var/run/dpdk/spdk_pid840532 00:30:39.370 Removing: /var/run/dpdk/spdk_pid840781 00:30:39.370 Removing: /var/run/dpdk/spdk_pid841034 00:30:39.370 Removing: /var/run/dpdk/spdk_pid841289 00:30:39.370 Removing: /var/run/dpdk/spdk_pid841536 00:30:39.370 Removing: /var/run/dpdk/spdk_pid841781 00:30:39.370 Removing: /var/run/dpdk/spdk_pid842036 00:30:39.370 Removing: /var/run/dpdk/spdk_pid842282 00:30:39.370 Removing: /var/run/dpdk/spdk_pid842535 00:30:39.370 Removing: /var/run/dpdk/spdk_pid842780 00:30:39.370 Removing: /var/run/dpdk/spdk_pid843032 00:30:39.370 Removing: /var/run/dpdk/spdk_pid843288 00:30:39.370 Removing: /var/run/dpdk/spdk_pid843537 00:30:39.370 Removing: /var/run/dpdk/spdk_pid843786 00:30:39.370 Removing: /var/run/dpdk/spdk_pid844039 00:30:39.370 Removing: /var/run/dpdk/spdk_pid844253 00:30:39.370 Removing: /var/run/dpdk/spdk_pid844630 00:30:39.370 Removing: /var/run/dpdk/spdk_pid848172 00:30:39.370 Removing: /var/run/dpdk/spdk_pid891909 00:30:39.370 Removing: /var/run/dpdk/spdk_pid896155 00:30:39.370 Removing: /var/run/dpdk/spdk_pid906297 00:30:39.370 Removing: /var/run/dpdk/spdk_pid911866 00:30:39.370 Removing: /var/run/dpdk/spdk_pid915866 00:30:39.370 Removing: 
/var/run/dpdk/spdk_pid916549 00:30:39.370 Removing: /var/run/dpdk/spdk_pid922545 00:30:39.370 Removing: /var/run/dpdk/spdk_pid928576 00:30:39.370 Removing: /var/run/dpdk/spdk_pid928636 00:30:39.370 Removing: /var/run/dpdk/spdk_pid929497 00:30:39.371 Removing: /var/run/dpdk/spdk_pid930413 00:30:39.371 Removing: /var/run/dpdk/spdk_pid931330 00:30:39.371 Removing: /var/run/dpdk/spdk_pid931797 00:30:39.371 Removing: /var/run/dpdk/spdk_pid931810 00:30:39.371 Removing: /var/run/dpdk/spdk_pid932091 00:30:39.371 Removing: /var/run/dpdk/spdk_pid932254 00:30:39.371 Removing: /var/run/dpdk/spdk_pid932268 00:30:39.371 Removing: /var/run/dpdk/spdk_pid933178 00:30:39.371 Removing: /var/run/dpdk/spdk_pid934084 00:30:39.371 Removing: /var/run/dpdk/spdk_pid934841 00:30:39.371 Removing: /var/run/dpdk/spdk_pid935481 00:30:39.371 Removing: /var/run/dpdk/spdk_pid935483 00:30:39.371 Removing: /var/run/dpdk/spdk_pid935716 00:30:39.371 Removing: /var/run/dpdk/spdk_pid936958 00:30:39.371 Removing: /var/run/dpdk/spdk_pid937996 00:30:39.631 Removing: /var/run/dpdk/spdk_pid946268 00:30:39.631 Removing: /var/run/dpdk/spdk_pid946812 00:30:39.631 Removing: /var/run/dpdk/spdk_pid951278 00:30:39.631 Removing: /var/run/dpdk/spdk_pid957138 00:30:39.631 Removing: /var/run/dpdk/spdk_pid959607 00:30:39.631 Removing: /var/run/dpdk/spdk_pid969909 00:30:39.631 Removing: /var/run/dpdk/spdk_pid978814 00:30:39.631 Removing: /var/run/dpdk/spdk_pid980639 00:30:39.631 Removing: /var/run/dpdk/spdk_pid981566 00:30:39.631 Removing: /var/run/dpdk/spdk_pid998272 00:30:39.631 Clean 00:30:39.631 23:56:28 -- common/autotest_common.sh@1445 -- # return 0 00:30:39.631 23:56:28 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:39.631 23:56:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:39.631 23:56:28 -- common/autotest_common.sh@10 -- # set +x 00:30:39.631 23:56:28 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:39.631 23:56:28 -- common/autotest_common.sh@722 -- # xtrace_disable 
00:30:39.631 23:56:28 -- common/autotest_common.sh@10 -- # set +x 00:30:39.631 23:56:28 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:39.631 23:56:28 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:39.631 23:56:28 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:39.631 23:56:28 -- spdk/autotest.sh@391 -- # hash lcov 00:30:39.631 23:56:28 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:39.631 23:56:28 -- spdk/autotest.sh@393 -- # hostname 00:30:39.631 23:56:28 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:39.890 geninfo: WARNING: invalid characters removed from testname! 
00:31:01.820 23:56:48 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:02.385 23:56:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:04.284 23:56:53 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:06.187 23:56:54 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:08.113 23:56:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:10.018 23:56:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:11.460 23:57:00 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:11.719 23:57:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.719 23:57:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:11.719 23:57:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.719 23:57:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.719 23:57:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.719 23:57:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.719 23:57:00 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.719 23:57:00 -- paths/export.sh@5 -- $ export PATH 00:31:11.719 23:57:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.719 23:57:00 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:31:11.719 23:57:00 -- common/autobuild_common.sh@444 -- $ date +%s 00:31:11.719 23:57:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721080620.XXXXXX 00:31:11.719 23:57:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721080620.WrAkPa 00:31:11.719 23:57:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:31:11.719 23:57:00 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:31:11.719 23:57:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:31:11.719 23:57:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:31:11.719 23:57:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:31:11.719 23:57:00 -- common/autobuild_common.sh@460 -- $ get_config_params 00:31:11.719 23:57:00 -- common/autotest_common.sh@390 -- $ xtrace_disable 00:31:11.719 23:57:00 -- common/autotest_common.sh@10 -- $ set +x 00:31:11.719 23:57:00 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:31:11.719 23:57:00 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:31:11.719 23:57:00 -- pm/common@17 -- $ local monitor 00:31:11.719 23:57:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:11.719 23:57:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:11.719 23:57:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:11.719 23:57:00 -- pm/common@21 -- $ date +%s 00:31:11.719 23:57:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:11.719 23:57:00 -- pm/common@21 -- $ date +%s 00:31:11.719 23:57:00 -- pm/common@25 -- $ sleep 1 00:31:11.719 23:57:00 -- pm/common@21 -- $ date +%s 00:31:11.719 23:57:00 -- pm/common@21 -- $ date +%s 00:31:11.719 23:57:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080620 00:31:11.719 23:57:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080620 00:31:11.719 23:57:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autopackage.sh.1721080620 00:31:11.719 23:57:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080620 00:31:11.719 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080620_collect-vmstat.pm.log 00:31:11.719 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080620_collect-cpu-load.pm.log 00:31:11.719 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080620_collect-cpu-temp.pm.log 00:31:11.719 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080620_collect-bmc-pm.bmc.pm.log 00:31:12.656 23:57:01 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:31:12.656 23:57:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:31:12.656 23:57:01 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:12.656 23:57:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:12.656 23:57:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:12.656 23:57:01 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:12.656 23:57:01 -- common/autotest_common.sh@728 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:12.656 23:57:01 -- common/autotest_common.sh@729 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:12.656 23:57:01 -- common/autotest_common.sh@731 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:12.656 23:57:01 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:12.656 23:57:01 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:12.656 23:57:01 -- pm/common@29 -- $ signal_monitor_resources 
TERM 00:31:12.656 23:57:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:12.656 23:57:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:12.656 23:57:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:31:12.656 23:57:01 -- pm/common@44 -- $ pid=1216458 00:31:12.656 23:57:01 -- pm/common@50 -- $ kill -TERM 1216458 00:31:12.656 23:57:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:12.656 23:57:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:31:12.656 23:57:01 -- pm/common@44 -- $ pid=1216460 00:31:12.656 23:57:01 -- pm/common@50 -- $ kill -TERM 1216460 00:31:12.656 23:57:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:12.656 23:57:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:31:12.656 23:57:01 -- pm/common@44 -- $ pid=1216462 00:31:12.656 23:57:01 -- pm/common@50 -- $ kill -TERM 1216462 00:31:12.656 23:57:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:12.656 23:57:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:31:12.656 23:57:01 -- pm/common@44 -- $ pid=1216485 00:31:12.656 23:57:01 -- pm/common@50 -- $ sudo -E kill -TERM 1216485 00:31:12.656 + [[ -n 718454 ]] 00:31:12.656 + sudo kill 718454 00:31:12.666 [Pipeline] } 00:31:12.684 [Pipeline] // stage 00:31:12.690 [Pipeline] } 00:31:12.712 [Pipeline] // timeout 00:31:12.717 [Pipeline] } 00:31:12.730 [Pipeline] // catchError 00:31:12.735 [Pipeline] } 00:31:12.750 [Pipeline] // wrap 00:31:12.756 [Pipeline] } 00:31:12.770 [Pipeline] // catchError 00:31:12.778 [Pipeline] stage 00:31:12.779 [Pipeline] { (Epilogue) 00:31:12.791 [Pipeline] catchError 00:31:12.792 [Pipeline] { 00:31:12.806 [Pipeline] echo 00:31:12.808 Cleanup 
processes 00:31:12.814 [Pipeline] sh 00:31:13.095 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:13.095 1216690 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:31:13.095 1217018 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:13.109 [Pipeline] sh 00:31:13.390 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:13.390 ++ grep -v 'sudo pgrep' 00:31:13.390 ++ awk '{print $1}' 00:31:13.390 + sudo kill -9 1216690 00:31:13.400 [Pipeline] sh 00:31:13.680 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:23.678 [Pipeline] sh 00:31:23.962 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:23.962 Artifacts sizes are good 00:31:23.977 [Pipeline] archiveArtifacts 00:31:23.983 Archiving artifacts 00:31:24.157 [Pipeline] sh 00:31:24.442 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:31:24.460 [Pipeline] cleanWs 00:31:24.475 [WS-CLEANUP] Deleting project workspace... 00:31:24.475 [WS-CLEANUP] Deferred wipeout is used... 00:31:24.482 [WS-CLEANUP] done 00:31:24.483 [Pipeline] } 00:31:24.497 [Pipeline] // catchError 00:31:24.505 [Pipeline] sh 00:31:24.779 + logger -p user.info -t JENKINS-CI 00:31:24.788 [Pipeline] } 00:31:24.802 [Pipeline] // stage 00:31:24.805 [Pipeline] } 00:31:24.821 [Pipeline] // node 00:31:24.826 [Pipeline] End of Pipeline 00:31:24.853 Finished: SUCCESS